TR2015-100

Speech enhancement and recognition using multi-task learning of long short-term memory recurrent neural networks


    •  Chen, Z., Watanabe, S., Erdogan, H., Hershey, J.R., "Speech Enhancement and Recognition Using Multi-Task Learning of Long Short-Term Memory Recurrent Neural Networks", Interspeech, September 2015, vol. 1 of 5, pp. 1278.
      BibTeX TR2015-100 PDF
      @inproceedings{Chen2015sep,
        author = {Chen, Z. and Watanabe, S. and Erdogan, H. and Hershey, J.R.},
        title = {Speech Enhancement and Recognition Using Multi-Task Learning of Long Short-Term Memory Recurrent Neural Networks},
        booktitle = {Interspeech},
        year = 2015,
        volume = {1 of 5},
        pages = 1278,
        month = sep,
        isbn = {978-1-5108-1790-6},
        url = {https://www.merl.com/publications/TR2015-100}
      }
  • Research Areas: Artificial Intelligence, Speech & Audio

Abstract:

Long Short-Term Memory (LSTM) recurrent neural networks have proven effective in modeling speech and have achieved outstanding performance in both speech enhancement (SE) and automatic speech recognition (ASR). To further improve the performance of noise-robust speech recognition, a combination of speech enhancement and recognition was shown to be promising in earlier work. This paper explores options for the consistent integration of SE and ASR using LSTM networks. Since SE and ASR have different objective criteria, it is not clear what kind of integration ultimately leads to the best word error rate for noise-robust ASR tasks. In this work, several integration architectures are proposed and tested, including: (1) a pipeline architecture of LSTM-based SE and ASR with sequence training, (2) an alternating estimation architecture, and (3) a multi-task hybrid LSTM network architecture. The proposed models were evaluated on the 2nd CHiME speech separation and recognition challenge task, and show significant improvements relative to prior results.
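The multi-task hybrid architecture named in the abstract can be illustrated with a minimal sketch: a shared LSTM body feeding two heads, one regressing an enhancement mask (SE) and one predicting per-frame senone posteriors (ASR), trained with a weighted sum of the two losses. This is a hypothetical PyTorch illustration of the general technique; all layer sizes, loss weights, and target formats are assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class MultiTaskLSTM(nn.Module):
    """Sketch of a multi-task LSTM: shared recurrent layers with an SE head
    (mask regression) and an ASR head (senone classification). Dimensions
    are illustrative only."""
    def __init__(self, n_feats=40, n_hidden=128, n_senones=500):
        super().__init__()
        self.shared = nn.LSTM(n_feats, n_hidden, num_layers=2, batch_first=True)
        self.se_head = nn.Linear(n_hidden, n_feats)     # one mask value per feature bin
        self.asr_head = nn.Linear(n_hidden, n_senones)  # per-frame senone logits

    def forward(self, x):
        h, _ = self.shared(x)                  # (batch, time, n_hidden)
        mask = torch.sigmoid(self.se_head(h))  # enhancement mask in [0, 1]
        logits = self.asr_head(h)              # ASR classification logits
        return mask, logits

# Joint objective: weighted sum of the SE and ASR losses (weights assumed).
model = MultiTaskLSTM()
noisy = torch.randn(4, 50, 40)            # (batch, frames, filterbank dims)
mask_target = torch.rand(4, 50, 40)       # ideal-ratio-mask style SE target
senone_target = torch.randint(0, 500, (4, 50))

mask, logits = model(noisy)
se_loss = nn.functional.mse_loss(mask, mask_target)
asr_loss = nn.functional.cross_entropy(logits.reshape(-1, 500),
                                       senone_target.reshape(-1))
loss = 0.5 * se_loss + 0.5 * asr_loss     # illustrative task weighting
loss.backward()                           # gradients flow into the shared LSTM
```

Because both heads backpropagate through the shared LSTM, the recurrent layers learn a representation serving both objectives, which is the essence of the multi-task setup.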