Multi-Stream End-to-End Speech Recognition

    •  Li, R., Wang, X., Mallidi, H., Watanabe, S., Hori, T., Hermansky, H., "Multi-Stream End-to-End Speech Recognition", IEEE/ACM Transactions on Audio, Speech and Language Processing, DOI: 10.1109/TASLP.2019.2959721, Vol. 28, pp. 646-655, March 2020.
      TR2020-030
      @article{Li2020mar,
        author = {Li, Ruizhi and Wang, Xiaofei and Mallidi, Harish and Watanabe, Shinji and Hori, Takaaki and Hermansky, Hynek},
        title = {Multi-Stream End-to-End Speech Recognition},
        journal = {IEEE/ACM Transactions on Audio, Speech and Language Processing},
        year = 2020,
        volume = 28,
        pages = {646--655},
        month = mar,
        doi = {10.1109/TASLP.2019.2959721}
      }
  • Research Areas:

    Artificial Intelligence, Machine Learning, Speech & Audio


Attention-based methods and Connectionist Temporal Classification (CTC) networks have been promising research directions for end-to-end (E2E) Automatic Speech Recognition (ASR). The joint CTC/Attention model has achieved great success by utilizing both architectures during multi-task training and joint decoding. In this work, we present a multi-stream framework based on joint CTC/Attention E2E ASR, with parallel streams represented by separate encoders that aim to capture diverse information. On top of the regular attention networks, a Hierarchical Attention Network (HAN) is introduced to steer the decoder toward the most informative encoders. A separate CTC network is assigned to each stream to enforce monotonic alignments. Two representative frameworks are proposed and discussed: the Multi-Encoder Multi-Resolution (MEM-Res) framework and the Multi-Encoder Multi-Array (MEM-Array) framework. In the MEM-Res framework, two heterogeneous encoders with different architectures, temporal resolutions, and separate CTC networks work in parallel to extract complementary information from the same acoustic input. Experiments are conducted on Wall Street Journal (WSJ) and CHiME-4, resulting in relative Word Error Rate (WER) reductions of 18.0-32.1% and a best WER of 3.6% on the WSJ eval92 test set. The MEM-Array framework aims at improving far-field ASR robustness using multiple microphone arrays, each handled by a separate encoder. Compared with the best single-array results, the proposed framework achieves relative WER reductions of 3.7% and 9.7% on the AMI and DIRHA multi-array corpora, respectively, outperforming conventional fusion strategies.
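The stream-level fusion performed by the Hierarchical Attention Network can be illustrated with a minimal sketch: each encoder stream produces a context vector via its own frame-level attention, and a second attention layer weights those per-stream contexts against the current decoder state. All names, dimensions, and the additive scoring function below are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def han_fuse(stream_contexts, dec_state, W_c, W_s, v):
    """Stream-level (hierarchical) attention over per-stream context vectors.

    stream_contexts: list of (d_c,) context vectors, one per encoder stream
    dec_state: (d_s,) current decoder state
    W_c, W_s, v: projection parameters (random here, learned in practice)
    Returns the fused context vector and the per-stream attention weights.
    """
    # Additive attention score for each stream's context vector
    scores = np.array([v @ np.tanh(W_c @ c + W_s @ dec_state)
                       for c in stream_contexts])
    weights = softmax(scores)  # distribution over streams
    # Fused context: attention-weighted sum of the stream contexts
    fused = sum(w * c for w, c in zip(weights, stream_contexts))
    return fused, weights

# Toy dimensions: context size, decoder-state size, attention size
d_c, d_s, d_a = 4, 3, 5
W_c = rng.standard_normal((d_a, d_c))
W_s = rng.standard_normal((d_a, d_s))
v = rng.standard_normal(d_a)

contexts = [rng.standard_normal(d_c) for _ in range(2)]  # two encoder streams
state = rng.standard_normal(d_s)
fused, weights = han_fuse(contexts, state, W_c, W_s, v)
print(weights)  # nonnegative weights summing to 1 across streams
```

Because the weights are recomputed at every decoding step from the decoder state, the model can shift attention between streams (e.g., between microphone arrays, or between encoders of different temporal resolution) as the utterance progresses.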