TR2018-176

End-to-End Speech Recognition with Word-Based RNN Language Models


    •  Hori, T., Cho, J., Watanabe, S., "End-to-End Speech Recognition with Word-Based RNN Language Models", IEEE Spoken Language Technology Workshop (SLT), DOI: 10.1109/SLT.2018.8639693, December 2018.
      BibTeX:
      @inproceedings{Hori2018dec,
        author = {Hori, Takaaki and Cho, Jaejin and Watanabe, Shinji},
        title = {End-to-End Speech Recognition with Word-Based RNN Language Models},
        booktitle = {IEEE Spoken Language Technology Workshop (SLT)},
        year = 2018,
        month = dec,
        doi = {10.1109/SLT.2018.8639693},
        url = {https://www.merl.com/publications/TR2018-176}
      }
Research Areas: Machine Learning, Speech & Audio

Abstract:

This paper investigates the impact of word-based RNN language models (RNN-LMs) on the performance of end-to-end automatic speech recognition (ASR). In our prior work, we proposed a multi-level LM, in which character-based and word-based RNN-LMs are combined in hybrid CTC/attention-based ASR. Although this multi-level approach achieves significant error reduction on the Wall Street Journal (WSJ) task, two different LMs need to be trained and used for decoding, which increases the computational cost and memory usage. In this paper, we further propose a novel word-based RNN-LM that allows decoding with only the word-based LM: it provides look-ahead word probabilities to predict the next characters in place of the character-based LM, yielding accuracy competitive with the multi-level LM at lower computational cost. We demonstrate the efficacy of the word-based RNN-LMs using a larger corpus, LibriSpeech, in addition to WSJ, which we used in the prior work. Furthermore, we show that the proposed model achieves 5.1% WER on the WSJ Eval'92 test set when the vocabulary size is increased, which is the best WER reported for end-to-end ASR systems on this benchmark.
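
The abstract describes the look-ahead mechanism only at a high level. The toy Python sketch below illustrates one way word-level probabilities can be turned into per-character look-ahead scores during character-level beam search: the mass of vocabulary words consistent with the current partial word is redistributed over the possible next characters. The function name, the dictionary standing in for the RNN-LM output, and the toy vocabulary are illustrative assumptions, not the authors' implementation.

from collections import defaultdict


def lookahead_char_probs(word_probs, partial_word):
    """Distribute word-level probability mass over candidate next characters.

    word_probs   : dict word -> P(word | history); a fixed dict stands in here
                   for the word-based RNN-LM's predicted distribution.
    partial_word : characters of the word currently being built by the decoder.
    Returns a dict next_char -> look-ahead probability.
    """
    # Total mass of vocabulary words consistent with the current prefix.
    prefix_mass = sum(p for w, p in word_probs.items()
                      if w.startswith(partial_word))
    if prefix_mass == 0.0:
        # An out-of-vocabulary prefix would need a character-level fallback.
        return {}

    # Mass reaching each candidate next character.
    char_mass = defaultdict(float)
    for w, p in word_probs.items():
        if w.startswith(partial_word) and len(w) > len(partial_word):
            char_mass[w[len(partial_word)]] += p

    return {c: m / prefix_mass for c, m in char_mass.items()}


if __name__ == "__main__":
    # Toy distribution standing in for the word-based RNN-LM prediction.
    toy_word_probs = {"speech": 0.5, "speed": 0.3, "spell": 0.2}
    print(lookahead_char_probs(toy_word_probs, "spe"))
    # -> {'e': 0.8, 'l': 0.2}: "spee" covers speech+speed, "spel" covers spell

In a full decoder these per-character scores would be combined with the CTC/attention scores at each beam-search step, with the word-level probability applied once a word boundary is reached; those details are beyond this sketch.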