TR2017-133

Semi-Supervised Learning of a Pronunciation Dictionary from Disjoint Phonemic Transcripts and Text


    •  Shinozaki, T., Watanabe, S., Mochihashi, D., Neubig, G., "Semi-Supervised Learning of a Pronunciation Dictionary from Disjoint Phonemic Transcripts and Text", Interspeech, August 2017.
      @inproceedings{Shinozaki2017aug,
        author = {Shinozaki, Takahiro and Watanabe, Shinji and Mochihashi, Daichi and Neubig, Graham},
        title = {Semi-Supervised Learning of a Pronunciation Dictionary from Disjoint Phonemic Transcripts and Text},
        booktitle = {Interspeech},
        year = 2017,
        month = aug,
        url = {https://www.merl.com/publications/TR2017-133}
      }
  • Research Areas:

    Artificial Intelligence, Speech & Audio

Abstract:

While the performance of automatic speech recognition systems has recently approached human levels on some tasks, their application is still limited to specific domains. This is because system development relies on extensive supervised training and expert tuning in the target domain. To solve this problem, systems must become more self-sufficient, having the ability to learn directly from speech and to adapt to new tasks. One open question in this area is how to learn a pronunciation dictionary containing the appropriate vocabulary. Humans can recognize words, even ones they have never heard before, by reading text and understanding the context in which a word is used. However, this ability is missing in current speech recognition systems. In this work, we propose a new framework that automatically expands an initial pronunciation dictionary using independently sampled acoustic and textual data. While the task is very challenging and this work is at an early stage, we demonstrate that a model based on Bayesian learning of Dirichlet processes can acquire word pronunciations from phone transcripts and text of the WSJ data set.
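The abstract describes the framework only at a high level. As an illustration of the kind of Dirichlet-process machinery it refers to, the sketch below segments phone transcripts into reusable pronunciation units using a Chinese-restaurant-process predictive distribution. This is not the authors' implementation: the toy phone strings, the concentration parameter ALPHA, the base-measure settings, and the maximization-based (rather than sampling-based) inference are all illustrative assumptions, and the paper's joint use of disjoint text data is omitted here.

    # Illustrative sketch only (not the authors' implementation): a tiny
    # Dirichlet-process / Chinese-restaurant-process model that segments
    # phone transcripts into reusable pronunciation units.  ALPHA, the
    # base-measure settings, and the toy transcripts are assumptions.
    import math
    from collections import Counter

    ALPHA = 1.0           # DP concentration parameter (assumed value)
    P_PHONE = 1.0 / 40    # uniform probability of one phone (assumed inventory of 40)
    P_STOP = 0.5          # geometric stopping probability for unit length

    counts = Counter()    # CRP "table" counts for pronunciation units
    total = 0             # total number of unit tokens generated so far


    def base_prob(unit):
        """Base measure G0: geometric length prior times uniform phone choices."""
        return (P_PHONE ** len(unit)) * ((1 - P_STOP) ** (len(unit) - 1)) * P_STOP


    def unit_prob(unit):
        """DP predictive probability of generating `unit` next."""
        return (counts[unit] + ALPHA * base_prob(unit)) / (total + ALPHA)


    def segment(phones, max_len=6):
        """Best segmentation of one phone sequence into units, scored with the
        current DP predictive distribution (a maximization approximation to
        the Gibbs sampling a full Bayesian treatment would use)."""
        n = len(phones)
        best = [(-math.inf, None)] * (n + 1)
        best[0] = (0.0, None)
        for end in range(1, n + 1):
            for start in range(max(0, end - max_len), end):
                unit = tuple(phones[start:end])
                score = best[start][0] + math.log(unit_prob(unit))
                if score > best[end][0]:
                    best[end] = (score, start)
        units, end = [], n          # backtrace the best segmentation
        while end > 0:
            start = best[end][1]
            units.append(tuple(phones[start:end]))
            end = start
        return list(reversed(units))


    # Hypothetical phone transcripts standing in for the WSJ phone strings.
    transcripts = [
        "dh ax k ae t s ae t".split(),
        "dh ax d ao g r ae n".split(),
        "dh ax k ae t r ae n".split(),
    ]

    # A few sweeps: re-segment with the previous sweep's counts, then re-count.
    for sweep in range(5):
        new_counts, new_total = Counter(), 0
        for phones in transcripts:
            for unit in segment(phones):
                new_counts[unit] += 1
                new_total += 1
        counts, total = new_counts, new_total

    print("most frequent pronunciation units:", counts.most_common(5))

In the paper's actual framework, the discovered phone-sequence units would additionally be tied to words from the independently sampled text so that the pronunciation dictionary can be expanded; the sketch above only shows the unsupervised phone-side clustering under a DP prior.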