Chiori Hori
  • Biography

    Chiori has been a member of MERL's research team since 2015. Her work focuses on spoken dialog and audio-visual scene-aware dialog technologies for human-robot communication. She serves on the editorial board of "Computer Speech and Language" and is a technical committee member of the "Speech and Language Processing Group" of the IEEE Signal Processing Society. Before joining MERL, Chiori spent eight years at Japan's National Institute of Information and Communications Technology (NICT), where she was Research Manager of the Spoken Language Communication Laboratory. Before NICT, she held research positions at Carnegie Mellon University and the NTT Communication Science Laboratories.

  • Recent News & Events

    •  TALK    [MERL Seminar Series 2023] Prof. Komei Sugiura presents talk titled The Confluence of Vision, Language, and Robotics
      Date & Time: Thursday, September 28, 2023; 12:00 PM
      Speaker: Komei Sugiura, Keio University
      MERL Host: Chiori Hori
      Research Areas: Artificial Intelligence, Machine Learning, Robotics, Speech & Audio
      Abstract
      • Recent advances in multimodal models that fuse vision and language are revolutionizing robotics. In this lecture, I will begin by introducing recent multimodal foundational models and their applications in robotics. The second topic of this talk will address our recent work on multimodal language processing in robotics. The shortage of home care workers has become a pressing societal issue, and the use of domestic service robots (DSRs) to assist individuals with disabilities is seen as a possible solution. I will present our work on DSRs that are capable of open-vocabulary mobile manipulation, referring expression comprehension and segmentation models for everyday objects, and future captioning methods for cooking videos and DSRs.
    •  NEWS    MERL congratulates Prof. Alex Waibel on receiving 2023 IEEE James L. Flanagan Speech and Audio Processing Award
      Date: August 22, 2022
      MERL Contacts: Chiori Hori; Jonathan Le Roux; Anthony Vetro
      Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
      Brief
      • IEEE has announced that the recipient of the 2023 IEEE James L. Flanagan Speech and Audio Processing Award will be Prof. Alex Waibel (CMU/Karlsruhe Institute of Technology), “For pioneering contributions to spoken language translation and supporting technologies.” Mitsubishi Electric Research Laboratories (MERL), which has become the new sponsor of this prestigious award in 2022, extends our warmest congratulations to Prof. Waibel.

        MERL Senior Principal Research Scientist Dr. Chiori Hori, who worked with Dr. Waibel at Carnegie Mellon University and collaborated with him as part of national projects on speech summarization and translation, comments on his invaluable contributions to the field: “He has contributed not only to the invention of groundbreaking technology in speech and spoken language processing but also to the promotion of an abundance of research projects through international research consortiums by linking American, European, and Asian research communities. Many of his former laboratory members and collaborators are now leading R&D in the AI field.”

        The IEEE Board of Directors established the IEEE James L. Flanagan Speech and Audio Processing Award in 2002 for outstanding contributions to the advancement of speech and/or audio signal processing. This award has recognized the contributions of some of the most renowned pioneers and leaders in their respective fields. MERL is proud to support the recognition of outstanding contributions to the field of speech and audio processing through its sponsorship of this award.

  • Awards

    •  AWARD    Honorable Mention Award at NeurIPS 2023 Instruction Workshop
      Date: December 15, 2023
      Awarded to: Lingfeng Sun, Devesh K. Jha, Chiori Hori, Siddarth Jain, Radu Corcodel, Xinghao Zhu, Masayoshi Tomizuka and Diego Romeres
      MERL Contacts: Radu Corcodel; Chiori Hori; Siddarth Jain; Devesh K. Jha; Diego Romeres
      Research Areas: Artificial Intelligence, Machine Learning, Robotics
      Brief
      • MERL researchers received an Honorable Mention Award at the Workshop on Instruction Tuning and Instruction Following at the NeurIPS 2023 conference in New Orleans. The workshop focused on instruction tuning and instruction following for large language models (LLMs). During the workshop's oral presentation session, MERL researchers presented their work on interactive planning using LLMs for partially observable robotic tasks.
    •  AWARD    MERL team wins the Audio-Visual Speech Enhancement (AVSE) 2023 Challenge
      Date: December 16, 2023
      Awarded to: Zexu Pan, Gordon Wichern, Yoshiki Masuyama, Francois Germain, Sameer Khurana, Chiori Hori, and Jonathan Le Roux
      MERL Contacts: François Germain; Chiori Hori; Sameer Khurana; Jonathan Le Roux; Zexu Pan; Gordon Wichern
      Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
      Brief
      • MERL's Speech & Audio team ranked 1st out of 12 teams in the 2nd COG-MHEAR Audio-Visual Speech Enhancement Challenge (AVSE). The team was led by Zexu Pan, and also included Gordon Wichern, Yoshiki Masuyama, Francois Germain, Sameer Khurana, Chiori Hori, and Jonathan Le Roux.

        The AVSE challenge aims to design better speech enhancement systems by harnessing the visual aspects of speech (such as lip movements and gestures), in a manner similar to the brain’s multi-modal integration strategies. MERL’s system was a scenario-aware audio-visual TF-GridNet that incorporates the face recording of a target speaker as a conditioning factor and also recognizes whether the predominant interference signal is speech or background noise. In addition to outperforming all competing systems on objective metrics by a wide margin, MERL’s model achieved the best overall word intelligibility score in a listening test: 84.54%, compared to 57.56% for the baseline and 80.41% for the next-best team. Fisher’s least significant difference (LSD) was 2.14%, indicating that the model offered statistically significant speech intelligibility improvements over all other systems.
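        To make the significance claim above concrete: under Fisher's LSD procedure, the gap between two systems' mean scores is deemed significant when it exceeds the LSD threshold. The sketch below is purely illustrative; the system labels are shorthand, and the scores and 2.14-point LSD are the listening-test figures quoted above.

```python
# Illustrative sketch (not MERL code): check which pairwise gaps in mean
# word intelligibility exceed Fisher's least significant difference (LSD).
from itertools import combinations

# Mean word intelligibility scores (%) from the listening test above.
scores = {"MERL": 84.54, "next-best team": 80.41, "baseline": 57.56}
LSD = 2.14  # percentage points; a mean gap larger than this is significant

for (a, sa), (b, sb) in combinations(scores.items(), 2):
    gap = abs(sa - sb)
    verdict = "significant" if gap > LSD else "not significant"
    print(f"{a} vs {b}: gap = {gap:.2f} pts -> {verdict}")
```

        Every pairwise gap (4.13 points or more) exceeds the 2.14-point LSD, which is why the improvement is reported as statistically significant against all other systems.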
  • Research Highlights

  • Internships with Chiori

    • SA2073: Multimodal scene understanding

      We are looking for a graduate student interested in helping advance the field of multimodal scene understanding, with a focus on scene understanding using natural language for robot dialog and/or indoor monitoring using a large language model. The intern will collaborate with MERL researchers to derive and implement new models and optimization methods, conduct experiments, and prepare results for publication. Internships regularly lead to one or more publications in top-tier venues, which can later become part of the intern's doctoral work. The ideal candidates are senior Ph.D. students with experience in deep learning for audio-visual, signal, and natural language processing. Good programming skills in Python and knowledge of deep learning frameworks such as PyTorch are essential. Multiple positions are available with flexible start dates (not just spring/summer but throughout 2024) and durations (typically 3-6 months).

  • MERL Publications

    •  Pan, Z., Wichern, G., Masuyama, Y., Germain, F.G., Khurana, S., Hori, C., Le Roux, J., "Scenario-Aware Audio-Visual TF-GridNet for Target Speech Extraction", IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), DOI: 10.1109/ASRU57964.2023.10389618, December 2023.
      BibTeX TR2023-152 PDF
      • @inproceedings{Pan2023dec2,
      • author = {Pan, Zexu and Wichern, Gordon and Masuyama, Yoshiki and Germain, François G and Khurana, Sameer and Hori, Chiori and Le Roux, Jonathan},
      • title = {Scenario-Aware Audio-Visual TF-GridNet for Target Speech Extraction},
      • booktitle = {IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)},
      • year = 2023,
      • month = dec,
      • doi = {10.1109/ASRU57964.2023.10389618},
      • isbn = {979-8-3503-0689-7},
      • url = {https://www.merl.com/publications/TR2023-152}
      • }
    •  Sun, L., Jha, D.K., Hori, C., Jain, S., Corcodel, R., Zhu, X., Tomizuka, M., Romeres, D., "Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks", Advances in Neural Information Processing Systems (NeurIPS) Workshop on Instruction Tuning and Instruction Following, December 2023.
      BibTeX TR2023-148 PDF
      • @inproceedings{Sun2023dec,
      • author = {Sun, Lingfeng and Jha, Devesh K. and Hori, Chiori and Jain, Siddarth and Corcodel, Radu and Zhu, Xinghao and Tomizuka, Masayoshi and Romeres, Diego},
      • title = {Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks},
      • booktitle = {Advances in Neural Information Processing Systems (NeurIPS) Workshop on Instruction Tuning and Instruction Following},
      • year = 2023,
      • month = dec,
      • url = {https://www.merl.com/publications/TR2023-148}
      • }
    •  Sun, L., Jha, D.K., Hori, C., Jain, S., Corcodel, R., Zhu, X., Tomizuka, M., Romeres, D., "Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks", arXiv, December 2023.
      BibTeX arXiv
      • @article{Sun2023dec2,
      • author = {Sun, Lingfeng and Jha, Devesh K. and Hori, Chiori and Jain, Siddarth and Corcodel, Radu and Zhu, Xinghao and Tomizuka, Masayoshi and Romeres, Diego},
      • title = {Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks},
      • journal = {arXiv},
      • year = 2023,
      • month = dec,
      • url = {https://arxiv.org/abs/2312.06876}
      • }
    •  Bralios, D., Wichern, G., Germain, F.G., Pan, Z., Khurana, S., Hori, C., Le Roux, J., "Generation or Replication: Auscultating Audio Latent Diffusion Models", arXiv, October 2023.
      BibTeX arXiv
      • @article{Bralios2023oct,
      • author = {Bralios, Dimitrios and Wichern, Gordon and Germain, François G and Pan, Zexu and Khurana, Sameer and Hori, Chiori and Le Roux, Jonathan},
      • title = {Generation or Replication: Auscultating Audio Latent Diffusion Models},
      • journal = {arXiv},
      • year = 2023,
      • month = oct,
      • url = {https://arxiv.org/abs/2310.10604}
      • }
    •  Yoshino, K., Chen, Y.-N., Crook, P., Kottur, S., Li, J., Hedayatnia, B., Moon, S., Fei, Z., Li, Z., Zhang, J., Feng, Y., Zhou, J., Kim, S., Liu, Y., Jin, D., Papangelis, A., Gopalakrishnan, K., Hakkani-Tur, D., Damavandi, B., Geramifard, A., Hori, C., Shah, A., Zhang, C., Li, H., Sedoc, J., D’Haro, L.F., Banchs, R., Rudnicky, A., "Overview of the Tenth Dialog System Technology Challenge: DSTC10", IEEE/ACM Transactions on Audio, Speech, and Language Processing, DOI: 10.1109/TASLP.2023.3293030, pp. 1-14, August 2023.
      BibTeX TR2023-109 PDF
      • @article{Yoshino2023aug,
      • author = {Yoshino, Koichiro and Chen, Yun-Nung and Crook, Paul and Kottur, Satwik and Li, Jinchao and Hedayatnia, Behnam and Moon, Seungwhan and Fei, Zhengcong and Li, Zekang and Zhang, Jinchao and Feng, Yang and Zhou, Jie and Kim, Seokhwan and Liu, Yang and Jin, Di and Papangelis, Alexandros and Gopalakrishnan, Karthik and Hakkani-Tur, Dilek and Damavandi, Babak and Geramifard, Alborz and Hori, Chiori and Shah, Ankit and Zhang, Chen and Li, Haizhou and Sedoc, João and D’Haro, Luis F. and Banchs, Rafael and Rudnicky, Alexander},
      • title = {Overview of the Tenth Dialog System Technology Challenge: DSTC10},
      • journal = {IEEE/ACM Transactions on Audio, Speech, and Language Processing},
      • year = 2023,
      • pages = {1--14},
      • month = aug,
      • doi = {10.1109/TASLP.2023.3293030},
      • issn = {2329-9290},
      • url = {https://www.merl.com/publications/TR2023-109}
      • }
  • Videos

  • MERL Issued Patents

    • Title: "System and Method for Using Human Relationship Structures for Email Classification"
      Inventors: Harsham, Bret A.; Hori, Chiori
      Patent No.: 11,651,222
      Issue Date: May 16, 2023
    • Title: "Method and System for Scene-Aware Interaction"
      Inventors: Hori, Chiori; Cherian, Anoop; Chen, Siheng; Marks, Tim; Le Roux, Jonathan; Hori, Takaaki; Harsham, Bret A.; Vetro, Anthony; Sullivan, Alan
      Patent No.: 11,635,299
      Issue Date: Apr 25, 2023
    • Title: "Scene-Aware Video Encoder System and Method"
      Inventors: Cherian, Anoop; Hori, Chiori; Le Roux, Jonathan; Marks, Tim; Sullivan, Alan
      Patent No.: 11,582,485
      Issue Date: Feb 14, 2023
    • Title: "Low-latency Captioning System"
      Inventors: Hori, Chiori; Hori, Takaaki; Cherian, Anoop; Marks, Tim; Le Roux, Jonathan
      Patent No.: 11,445,267
      Issue Date: Sep 13, 2022
    • Title: "System and Method for a Dialogue Response Generation System"
      Inventors: Hori, Chiori; Cherian, Anoop; Marks, Tim; Hori, Takaaki
      Patent No.: 11,264,009
      Issue Date: Mar 1, 2022
    • Title: "Scene-Aware Video Dialog"
      Inventors: Geng, Shijie; Gao, Peng; Cherian, Anoop; Hori, Chiori; Le Roux, Jonathan
      Patent No.: 11,210,523
      Issue Date: Dec 28, 2021
    • Title: "Method and System for Multi-Label Classification"
      Inventors: Hori, Takaaki; Hori, Chiori; Watanabe, Shinji; Hershey, John R.; Harsham, Bret A.; Le Roux, Jonathan
      Patent No.: 11,086,918
      Issue Date: Aug 10, 2021
    • Title: "Position Estimation Under Multipath Transmission"
      Inventors: Kim, Kyeong-Jin; Orlik, Philip V.; Hori, Chiori
      Patent No.: 11,079,495
      Issue Date: Aug 3, 2021
    • Title: "Method and System for Multi-Modal Fusion Model"
      Inventors: Hori, Chiori; Hori, Takaaki; Hershey, John R.; Marks, Tim
      Patent No.: 10,417,498
      Issue Date: Sep 17, 2019
    • Title: "Method and System for Training Language Models to Reduce Recognition Errors"
      Inventors: Hori, Takaaki; Hori, Chiori; Watanabe, Shinji; Hershey, John R.
      Patent No.: 10,176,799
      Issue Date: Jan 8, 2019
    • Title: "Method and System for Role Dependent Context Sensitive Spoken and Textual Language Understanding with Neural Networks"
      Inventors: Hori, Chiori; Hori, Takaaki; Watanabe, Shinji; Hershey, John R.
      Patent No.: 9,842,106
      Issue Date: Dec 12, 2017