Speech & Audio

Audio source separation, recognition, and understanding.

Our current research focuses on the application of machine learning to estimation and inference problems in speech and audio processing. Topics include end-to-end speech recognition and enhancement, acoustic modeling and analysis, statistical dialog systems, natural language understanding, and adaptive multimodal interfaces.

  • Researchers

  • Awards

    •  AWARD   Best Student Paper Award at IEEE ICASSP 2018
      Date: April 17, 2018
      Awarded to: Zhong-Qiu Wang
      MERL Contact: Jonathan Le Roux
      Research Area: Speech & Audio
      Brief
      • Former MERL intern Zhong-Qiu Wang (Ph.D. candidate at Ohio State University) has received a Best Student Paper Award at the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018) for the paper "Multi-Channel Deep Clustering: Discriminative Spectral and Spatial Embeddings for Speaker-Independent Speech Separation" by Zhong-Qiu Wang, Jonathan Le Roux, and John Hershey. The paper presents work performed during Zhong-Qiu's internship at MERL in the summer of 2017, extending MERL's pioneering Deep Clustering framework for speech separation to a multi-channel setup (a toy sketch of the single-channel objective follows below). The award was received on behalf of Zhong-Qiu by MERL researcher and co-author Jonathan Le Roux during the conference, held in Calgary, April 15-20.
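
        As a rough illustration of the framework being extended, here is a minimal NumPy sketch of the single-channel deep clustering objective; all names, shapes, and values are illustrative, not taken from the paper. The network learns an embedding for each time-frequency bin so that the embedding affinity matrix V V^T matches the ideal speaker-assignment affinity Y Y^T; at test time, clustering the embeddings (e.g., with k-means) yields speaker masks.

        import numpy as np

        def deep_clustering_loss(V, Y):
            # Objective ||V V^T - Y Y^T||_F^2, expanded into the low-rank form
            # ||V^T V||_F^2 - 2 ||V^T Y||_F^2 + ||Y^T Y||_F^2 so that the
            # (TF x TF) affinity matrices never need to be formed explicitly.
            # V: (TF, D) unit-norm embeddings, one per time-frequency bin.
            # Y: (TF, C) one-hot ideal speaker assignments.
            vtv = V.T @ V
            vty = V.T @ Y
            yty = Y.T @ Y
            return (vtv ** 2).sum() - 2 * (vty ** 2).sum() + (yty ** 2).sum()

        # Toy example: 6 time-frequency bins, 4-dim embeddings, 2 speakers.
        rng = np.random.default_rng(0)
        V = rng.normal(size=(6, 4))
        V /= np.linalg.norm(V, axis=1, keepdims=True)  # unit-norm rows
        Y = np.eye(2)[[0, 0, 1, 1, 0, 1]]              # ideal binary assignments
        print(deep_clustering_loss(V, Y))

        The multi-channel extension in the awarded paper additionally exploits spatial information across microphones when computing the embeddings; the clustering step itself is unchanged.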
    •  AWARD   MERL's Speech Team Achieves World's 2nd Best Performance at the Third CHiME Speech Separation and Recognition Challenge
      Date: December 15, 2015
      Awarded to: MERL Speech & Audio Team and SRI
      MERL Contacts: Takaaki Hori; Jonathan Le Roux
      Research Area: Speech & Audio
      Brief
      • The results of the third 'CHiME' Speech Separation and Recognition Challenge were publicly announced on December 15 at the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2015) held in Scottsdale, Arizona, USA. MERL's Speech and Audio Team, in collaboration with SRI, ranked 2nd out of 26 teams from Europe, Asia, and the US. The task this year was to recognize speech recorded using a tablet in real environments such as cafes, buses, or busy streets. Due to the high levels of noise and the distance from the speaker's mouth to the microphones, this is a very challenging task, on which the baseline system only achieved a 33.4% word error rate. The MERL/SRI system featured state-of-the-art techniques including a multi-channel front-end, noise-robust feature extraction, and deep learning for speech enhancement, acoustic modeling, and language modeling, leading to a dramatic 73% relative reduction in word error rate, down to 9.1% (see the sketch below). The core of the system has since been released as a new official challenge baseline for the community to use.
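
        For reference, word error rate (WER) is the edit distance (substitutions, insertions, and deletions) between the hypothesis and reference word sequences, divided by the reference length, and the improvement quoted above is relative: (33.4 - 9.1) / 33.4 ≈ 73%. A small self-contained Python sketch (illustrative only, not the challenge scoring code):

        def wer(reference, hypothesis):
            # Standard dynamic-programming edit distance over words.
            ref, hyp = reference.split(), hypothesis.split()
            d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                d[i][0] = i
            for j in range(len(hyp) + 1):
                d[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                    d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
            return d[len(ref)][len(hyp)] / len(ref)

        print(wer("the cat sat", "the bat sat"))  # 1 substitution / 3 words ~ 0.333
        print((33.4 - 9.1) / 33.4)                # ~0.727, i.e., the ~73% relative reduction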
    •  AWARD   Awaya Prize Young Researcher Award
      Date: March 11, 2014
      Awarded to: Yuuki Tachioka
      MERL Contact: Jonathan Le Roux
      Research Area: Speech & Audio
      Brief
      • MELCO researcher Yuuki Tachioka received the Awaya Prize Young Researcher Award from the Acoustical Society of Japan (ASJ) for his work on the "effectiveness of discriminative approaches for speech recognition under noisy environments on the 2nd CHiME Challenge", which was based on joint work with MERL Speech & Audio Team researchers Shinji Watanabe, Jonathan Le Roux, and John R. Hershey.

    See All Awards for Speech & Audio
  • News & Events

    •  EVENT   SANE 2018 - Speech and Audio in the Northeast
      Date: Thursday, October 18, 2018
      MERL Contacts: Takaaki Hori; Jonathan Le Roux
      Location: Google, Cambridge, MA
      Research Area: Speech & Audio
      Brief
      • SANE 2018, a one-day event gathering researchers and students in speech and audio from the Northeast of the American continent, will be held on Thursday October 18, 2018 at Google, in Cambridge, MA. MERL is one of the organizers and sponsors of the workshop.

        It is the 7th edition in the SANE series of workshops, which started at MERL in 2012. Since the first edition, the audience has steadily grown, with a record 180 participants in 2017.

        SANE 2018 will feature invited talks by leading researchers from the Northeast, as well as from the international community. It will also feature a lively poster session, open to both students and researchers.
    •  NEWS   Takaaki Hori leads speech technology workshop
      Date: June 25, 2018 - August 3, 2018
      Where: Johns Hopkins University, Baltimore, MD
      MERL Contacts: Takaaki Hori; Jonathan Le Roux
      Research Area: Speech & Audio
      Brief
      • MERL Speech & Audio Team researcher Takaaki Hori led a team of 27 senior researchers and Ph.D. students from different organizations around the world, working on "Multi-lingual End-to-End Speech Recognition for Incomplete Data" as part of the Jelinek Memorial Summer Workshop on Speech and Language Technology (JSALT). The JSALT workshop is a renowned 6-week hands-on workshop held yearly since 1995. This year, the workshop was held at Johns Hopkins University in Baltimore from June 25 to August 3, 2018. Takaaki's team developed new methods for end-to-end Automatic Speech Recognition (ASR) with a focus on low-resource languages with limited labeled data.

        End-to-end ASR can significantly reduce the burden of developing ASR systems for new languages by eliminating the need for linguistic information such as pronunciation dictionaries; the character-level decoding sketch after this item illustrates the idea. Some end-to-end systems have recently achieved performance comparable to or better than that of conventional systems on several tasks. However, current model training algorithms basically require paired data, i.e., speech data and the corresponding transcription. A sufficient amount of such complete data is usually unavailable for low-resource languages, and creating such data sets is very expensive and time-consuming.

        The goal of Takaaki's team project was to expand the applicability of end-to-end models to multilingual ASR, and to develop new technology that would make it possible to build highly accurate systems even for low-resource languages without a large amount of paired data. Major accomplishments of the team include building multilingual end-to-end ASR systems for 17 languages, developing novel architectures and training methods for end-to-end ASR, building an end-to-end ASR-TTS (text-to-speech) chain for training on unpaired data, and developing ESPnet, an open-source end-to-end speech processing toolkit. Three papers stemming from the team's work have already been accepted to the 2018 IEEE Spoken Language Technology Workshop (SLT), with several more to be submitted to upcoming conferences.
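
        As a concrete illustration of why end-to-end models need no pronunciation dictionary, the minimal sketch below (illustrative only, not code from the workshop) performs greedy CTC decoding, one common end-to-end approach: the network emits a per-frame distribution over characters plus a blank symbol, and decoding simply collapses repeats and removes blanks, mapping speech directly to text with no intermediate phonetic lexicon. The alphabet and frame values here are made up for the example.

        import numpy as np

        # Illustrative alphabet: index 0 is the CTC blank, the rest are characters.
        ALPHABET = ["<blank>", "a", "b", "c", " "]

        def ctc_greedy_decode(logits):
            # Take the per-frame argmax path, merge repeated symbols, drop blanks.
            # logits: (T, len(ALPHABET)) per-frame scores from some network.
            path = logits.argmax(axis=1)
            out, prev = [], None
            for idx in path:
                if idx != prev and idx != 0:  # not a repeat, not a blank
                    out.append(ALPHABET[idx])
                prev = idx
            return "".join(out)

        # Toy 7-frame "network output" whose argmax path is: a a <blank> b b <blank> a
        frames = [1, 1, 0, 2, 2, 0, 1]
        logits = np.eye(len(ALPHABET))[frames]
        print(ctc_greedy_decode(logits))  # -> "aba"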

    See All News & Events for Speech & Audio
  • Research Highlights

  • Internships

    • SA1132: End-to-end acoustic analysis, recognition, and inference

      MERL is looking for an intern to work on fundamental research in the area of end-to-end acoustic analysis, recognition, and inference using machine learning techniques such as deep learning. The intern will collaborate with MERL researchers to derive and implement new models and optimization methods, conduct experiments, and prepare results for high-impact publication. The ideal candidate would be a senior Ph.D. student with experience in one or more of source separation, speech recognition, and natural language processing, as well as practical machine learning algorithms and related programming skills. The duration of the internship is expected to be 3-6 months.


    See All Internships for Speech & Audio
  • Recent Publications

    •  Seki, H., Hori, T., Watanabe, S., Le Roux, J., Hershey, J., "A Purely End-to-end System for Multi-speaker Speech Recognition", Annual Meeting of the Association for Computational Linguistics (ACL), July 16, 2018.
    •  Hori, C., Alamri, H., Wang, J., Wichern, G., Hori, T., Cherian, A., Marks, T.K., Cartillier, V., Lopes, R., Das, A., Essa, I., Batra, D., Parikh, D., "End-to-End Audio Visual Scene-Aware Dialog using Multimodal Attention-Based Video Features", arXiv, July 13, 2018.
      BibTeX (TR2018-085):

        @techreport{MERL_TR2018-085,
          author = {Hori, C. and Alamri, H. and Wang, J. and Wichern, G. and Hori, T. and Cherian, A. and Marks, T.K. and Cartillier, V. and Lopes, R. and Das, A. and Essa, I. and Batra, D. and Parikh, D.},
          title = {End-to-End Audio Visual Scene-Aware Dialog using Multimodal Attention-Based Video Features},
          institution = {MERL - Mitsubishi Electric Research Laboratories},
          address = {Cambridge, MA 02139},
          number = {TR2018-085},
          month = jul,
          year = 2018,
          url = {http://www.merl.com/publications/TR2018-085/}
        }
    •  Alamri, H., Cartillier, V., Lopes, R., Das, A., Wang, J., Essa, I., Batra, D., Parikh, D., Cherian, A., Marks, T.K., Hori, C., "Audio Visual Scene-Aware Dialog (AVSD) Challenge at DSTC7", arXiv, July 12, 2018.
      BibTeX (TR2018-069):

        @techreport{MERL_TR2018-069,
          author = {Alamri, H. and Cartillier, V. and Lopes, R. and Das, A. and Wang, J. and Essa, I. and Batra, D. and Parikh, D. and Cherian, A. and Marks, T.K. and Hori, C.},
          title = {Audio Visual Scene-Aware Dialog (AVSD) Challenge at DSTC7},
          institution = {MERL - Mitsubishi Electric Research Laboratories},
          address = {Cambridge, MA 02139},
          number = {TR2018-069},
          month = jul,
          year = 2018,
          url = {http://www.merl.com/publications/TR2018-069/}
        }
    •  Seki, H., Hori, T., Watanabe, S., Le Roux, J., Hershey, J., "A Purely End-to-end System for Multi-speaker Speech Recognition", arXiv, July 10, 2018.
      BibTeX (TR2018-058):

        @techreport{MERL_TR2018-058,
          author = {Seki, H. and Hori, T. and Watanabe, S. and Le Roux, J. and Hershey, J.},
          title = {A Purely End-to-end System for Multi-speaker Speech Recognition},
          institution = {MERL - Mitsubishi Electric Research Laboratories},
          address = {Cambridge, MA 02139},
          number = {TR2018-058},
          month = jul,
          year = 2018,
          url = {http://www.merl.com/publications/TR2018-058/}
        }
    •  Wang, Z.-Q., Le Roux, J., Hershey, J., "End-to-End Speech Separation with Unfolded Iterative Phase Reconstruction", arXiv, July 9, 2018.
      BibTeX (TR2018-051):

        @techreport{MERL_TR2018-051,
          author = {Wang, Z.-Q. and Le Roux, J. and Hershey, J.},
          title = {End-to-End Speech Separation with Unfolded Iterative Phase Reconstruction},
          institution = {MERL - Mitsubishi Electric Research Laboratories},
          address = {Cambridge, MA 02139},
          number = {TR2018-051},
          month = jul,
          year = 2018,
          url = {http://www.merl.com/publications/TR2018-051/}
        }
    •  Ochiai, T., Watanabe, S., Katagiri, S., Hori, T., Hershey, J.R., "Speaker Adaptation for Multichannel End-to-End Speech Recognition", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), April 2018.
    •  Seki, H., Watanabe, S., Hori, T., Le Roux, J., Hershey, J.R., "An End-to-End Language-Tracking Speech Recognizer for Mixed-Language Speech", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), April 2018.
    See All Publications for Speech & Audio
  • Videos

  • Free Downloads