Chiori Hori
  • Biography

    Chiori has been a member of MERL's research team since 2015. Her work focuses on spoken dialog and audio-visual scene-aware dialog technologies for human-robot communication. She serves on the editorial board of "Computer Speech and Language" and is a member of the "Speech and Language Processing" technical committee of the IEEE Signal Processing Society. Prior to joining MERL, Chiori spent 8 years at Japan's National Institute of Information and Communications Technology (NICT), where she was Research Manager of the Spoken Language Communication Laboratory. Before NICT, she held research positions at Carnegie Mellon University and the NTT Communication Science Laboratories.

  • Recent News & Events

    •  EVENT    MERL Contributes to ICASSP 2024
      Date: Sunday, April 14, 2024 - Friday, April 19, 2024
      Location: Seoul, South Korea
      MERL Contacts: Petros T. Boufounos; François Germain; Chiori Hori; Sameer Khurana; Toshiaki Koike-Akino; Jonathan Le Roux; Hassan Mansour; Kieran Parsons; Joshua Rapp; Anthony Vetro; Pu (Perry) Wang; Gordon Wichern; Ryoma Yataka
      Research Areas: Artificial Intelligence, Computational Sensing, Machine Learning, Robotics, Signal Processing, Speech & Audio
      Brief
      • MERL has made numerous contributions to both the organization and technical program of ICASSP 2024, which is being held in Seoul, Korea from April 14-19, 2024.

        Sponsorship and Awards

        MERL is proud to be a Bronze Patron of the conference and will participate in the student job fair on Thursday, April 18. Please join this session to learn more about employment opportunities at MERL, including openings for research scientists, post-docs, and interns.

        MERL is pleased to be the sponsor of two IEEE Awards that will be presented at the conference. We congratulate Prof. Stéphane G. Mallat, the recipient of the 2024 IEEE Fourier Award for Signal Processing, and Prof. Keiichi Tokuda, the recipient of the 2024 IEEE James L. Flanagan Speech and Audio Processing Award.

        Jonathan Le Roux, MERL Speech and Audio Senior Team Leader, will also be recognized during the Awards Ceremony for his recent elevation to IEEE Fellow.

        Technical Program

        MERL will present 13 papers in the main conference on a wide range of topics including automated audio captioning, speech separation, audio generative models, speech and sound synthesis, spatial audio reproduction, multimodal indoor monitoring, radar imaging, depth estimation, physics-informed machine learning, and integrated sensing and communications (ISAC). Three workshop papers have also been accepted for presentation on audio-visual speaker diarization, music source separation, and music generative models.

        Perry Wang is the co-organizer of the Workshop on Signal Processing and Machine Learning Advances in Automotive Radars (SPLAR), held on Sunday, April 14. It features keynote talks from leaders in both academia and industry, peer-reviewed workshop papers, and lightning talks from ICASSP regular tracks on signal processing and machine learning for automotive radar and, more generally, radar perception.

        Gordon Wichern will present an invited keynote talk on analyzing and interpreting audio deep learning models at the Workshop on Explainable Machine Learning for Speech and Audio (XAI-SA), held on Monday, April 15. He will also appear in a panel discussion on interpretable audio AI at the workshop.

        Perry Wang also co-organizes a two-part special session on Next-Generation Wi-Fi Sensing (SS-L9 and SS-L13) which will be held on Thursday afternoon, April 18. The special session includes papers on PHY-layer oriented signal processing and data-driven deep learning advances, and supports upcoming 802.11bf WLAN Sensing Standardization activities.

        Petros Boufounos is participating as a mentor in ICASSP’s Micro-Mentoring Experience Program (MiME).

        About ICASSP

        ICASSP is the flagship conference of the IEEE Signal Processing Society, and the world's largest and most comprehensive technical conference focused on research advances and the latest technological developments in signal and information processing. The event attracts more than 3000 participants.
    •  TALK    [MERL Seminar Series 2023] Prof. Komei Sugiura presents talk titled The Confluence of Vision, Language, and Robotics
      Date & Time: Thursday, September 28, 2023; 12:00 PM
      Speaker: Komei Sugiura, Keio University
      MERL Host: Chiori Hori
      Research Areas: Artificial Intelligence, Machine Learning, Robotics, Speech & Audio
      Abstract
      • Recent advances in multimodal models that fuse vision and language are revolutionizing robotics. In this lecture, I will begin by introducing recent multimodal foundational models and their applications in robotics. The second topic of this talk will address our recent work on multimodal language processing in robotics. The shortage of home care workers has become a pressing societal issue, and the use of domestic service robots (DSRs) to assist individuals with disabilities is seen as a possible solution. I will present our work on DSRs that are capable of open-vocabulary mobile manipulation, referring expression comprehension and segmentation models for everyday objects, and future captioning methods for cooking videos and DSRs.

    See All News & Events for Chiori
  • Awards

    •  AWARD    Honorable Mention Award at NeurIPS 2023 Instruction Workshop
      Date: December 15, 2023
      Awarded to: Lingfeng Sun, Devesh K. Jha, Chiori Hori, Siddharth Jain, Radu Corcodel, Xinghao Zhu, Masayoshi Tomizuka and Diego Romeres
      MERL Contacts: Radu Corcodel; Chiori Hori; Siddarth Jain; Devesh K. Jha; Diego Romeres
      Research Areas: Artificial Intelligence, Machine Learning, Robotics
      Brief
      • MERL researchers received an Honorable Mention Award at the Workshop on Instruction Tuning and Instruction Following at the NeurIPS 2023 conference in New Orleans. The workshop focused on instruction tuning and instruction following for Large Language Models (LLMs). MERL researchers presented their work on interactive planning using LLMs for partially observable robotic tasks during the workshop's oral presentation session.
    •  AWARD    MERL team wins the Audio-Visual Speech Enhancement (AVSE) 2023 Challenge
      Date: December 16, 2023
      Awarded to: Zexu Pan, Gordon Wichern, Yoshiki Masuyama, Francois Germain, Sameer Khurana, Chiori Hori, and Jonathan Le Roux
      MERL Contacts: François Germain; Chiori Hori; Sameer Khurana; Jonathan Le Roux; Gordon Wichern
      Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
      Brief
      • MERL's Speech & Audio team ranked 1st out of 12 teams in the 2nd COG-MHEAR Audio-Visual Speech Enhancement Challenge (AVSE). The team was led by Zexu Pan, and also included Gordon Wichern, Yoshiki Masuyama, Francois Germain, Sameer Khurana, Chiori Hori, and Jonathan Le Roux.

        The AVSE challenge aims to design better speech enhancement systems by harnessing the visual aspects of speech (such as lip movements and gestures) in a manner similar to the brain’s multi-modal integration strategies. MERL’s system was a scenario-aware audio-visual TF-GridNet that incorporates the face recording of a target speaker as a conditioning factor and also recognizes whether the predominant interference signal is speech or background noise. In addition to outperforming all competing systems in terms of objective metrics by a wide margin, in a listening test, MERL’s model achieved the best overall word intelligibility score of 84.54%, compared to 57.56% for the baseline and 80.41% for the next best team. The Fisher’s least significant difference (LSD) was 2.14%, indicating that MERL’s model offered statistically significant speech intelligibility improvements compared to all other systems.
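        The significance claim above can be checked directly: under Fisher's LSD, two means differ significantly whenever their gap exceeds the LSD threshold. A minimal sketch using only the scores reported above (the helper name is illustrative, not from the challenge):

```python
# Checking the reported word-intelligibility gaps against the
# Fisher's LSD threshold (2.14%) from the AVSE 2023 listening test.
# All numbers are taken from the text above.

LSD = 2.14  # Fisher's least significant difference, in percentage points

scores = {
    "MERL (TF-GridNet)": 84.54,
    "next best team": 80.41,
    "baseline": 57.56,
}

def significantly_better(a: float, b: float, lsd: float = LSD) -> bool:
    """Under Fisher's LSD, two means differ significantly iff |a - b| > lsd."""
    return abs(a - b) > lsd

merl = scores["MERL (TF-GridNet)"]
for name, s in scores.items():
    if name.startswith("MERL"):
        continue
    verdict = "significant" if significantly_better(merl, s) else "not significant"
    print(f"MERL vs {name}: gap = {merl - s:.2f} points -> {verdict}")
```

        The gap to the next best team (4.13 points) is roughly twice the LSD, consistent with the statement that the improvement is significant over all other systems.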
    See All Awards for MERL
  • Research Highlights

  • Internships with Chiori

    • SA2073: Multimodal scene-understanding

      We are looking for a graduate student interested in helping advance the field of multimodal scene understanding, with a focus on scene understanding using natural language for robot dialog and/or indoor monitoring using a large language model. The intern will collaborate with MERL researchers to derive and implement new models and optimization methods, conduct experiments, and prepare results for publication. Internships regularly lead to one or more publications in top-tier venues, which can later become part of the intern's doctoral work. The ideal candidates are senior Ph.D. students with experience in deep learning for audio-visual, signal, and natural language processing. Good programming skills in Python and knowledge of deep learning frameworks such as PyTorch are essential. Multiple positions are available with flexible start dates (not just Spring/Summer but throughout 2024) and duration (typically 3-6 months).

    • SA2181: Autonomous mobile robot data collection

      MERL is seeking a highly motivated intern to collaborate in the collection of data for sensing, planning, and control methods in a robotic test-bed using Turtlebots at MERL. The ideal candidate is enrolled in a Masters/PhD program in Electrical, Mechanical, Aerospace Engineering, Robotics, Computer Science, or related program, with prior experience in motion planning, control, optimization, computer vision, and their application in mobile robots, including experimental validation. The candidate should be proficient in ROS, C/C++, and Python. The expected duration of the internship is 1-2 months, with a flexible start date in early summer to fall.

    See All Internships at MERL
  • MERL Publications

    •  Sun, L., Jha, D.K., Hori, C., Jain, S., Corcodel, R., Zhu, X., Tomizuka, M., Romeres, D., "Interactive Planning Using Large Language Models for Partially Observable Robotic Tasks", IEEE International Conference on Robotics and Automation (ICRA), May 2024.
      BibTeX TR2024-052 PDF
      • @inproceedings{Sun2024may,
      • author = {Sun, Lingfeng and Jha, Devesh K. and Hori, Chiori and Jain, Siddarth and Corcodel, Radu and Zhu, Xinghao and Tomizuka, Masayoshi and Romeres, Diego},
      • title = {Interactive Planning Using Large Language Models for Partially Observable Robotic Tasks},
      • booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
      • year = 2024,
      • month = may,
      • url = {https://www.merl.com/publications/TR2024-052}
      • }
    •  Bralios, D., Wichern, G., Germain, F.G., Pan, Z., Khurana, S., Hori, C., Le Roux, J., "Generation or Replication: Auscultating Audio Latent Diffusion Models", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 2024.
      BibTeX TR2024-027 PDF
      • @inproceedings{Bralios2024mar,
      • author = {Bralios, Dimitrios and Wichern, Gordon and Germain, François G and Pan, Zexu and Khurana, Sameer and Hori, Chiori and Le Roux, Jonathan},
      • title = {Generation or Replication: Auscultating Audio Latent Diffusion Models},
      • booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
      • year = 2024,
      • month = mar,
      • url = {https://www.merl.com/publications/TR2024-027}
      • }
    •  Masuyama, Y., Wichern, G., Germain, F.G., Pan, Z., Khurana, S., Hori, C., Le Roux, J., "NIIRF: Neural IIR Filter Field for HRTF Upsampling and Personalization", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 2024.
      BibTeX TR2024-026 PDF
      • @inproceedings{Masuyama2024mar,
      • author = {Masuyama, Yoshiki and Wichern, Gordon and Germain, François G and Pan, Zexu and Khurana, Sameer and Hori, Chiori and Le Roux, Jonathan},
      • title = {NIIRF: Neural IIR Filter Field for HRTF Upsampling and Personalization},
      • booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
      • year = 2024,
      • month = mar,
      • url = {https://www.merl.com/publications/TR2024-026}
      • }
    •  Hori, C., Wang, P., Rahman, M., Vaca-Rubio, C., Khurana, S., Cherian, A., Le Roux, J., "Wi-Fi based Indoor Monitoring Enhanced by Multimodal Fusion", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), DOI: 10.1109/ICASSP48485.2024.10447600, March 2024, pp. 13296-13300.
      BibTeX TR2024-012 PDF
      • @inproceedings{Hori2024mar,
      • author = {Hori, Chiori and Wang, Pu and Rahman, Mahbub and Vaca-Rubio, Cristian and Khurana, Sameer and Cherian, Anoop and Le Roux, Jonathan},
      • title = {Wi-Fi based Indoor Monitoring Enhanced by Multimodal Fusion},
      • booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
      • year = 2024,
      • pages = {13296--13300},
      • month = mar,
      • publisher = {IEEE},
      • doi = {10.1109/ICASSP48485.2024.10447600},
      • issn = {2379-190X},
      • isbn = {979-8-3503-4485-1},
      • url = {https://www.merl.com/publications/TR2024-012}
      • }
    •  Pan, Z., Wichern, G., Masuyama, Y., Germain, F.G., Khurana, S., Hori, C., Le Roux, J., "Scenario-Aware Audio-Visual TF-GridNet for Target Speech Extraction", IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), DOI: 10.1109/ASRU57964.2023.10389618, December 2023.
      BibTeX TR2023-152 PDF
      • @inproceedings{Pan2023dec2,
      • author = {Pan, Zexu and Wichern, Gordon and Masuyama, Yoshiki and Germain, François G and Khurana, Sameer and Hori, Chiori and Le Roux, Jonathan},
      • title = {Scenario-Aware Audio-Visual TF-GridNet for Target Speech Extraction},
      • booktitle = {IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)},
      • year = 2023,
      • month = dec,
      • doi = {10.1109/ASRU57964.2023.10389618},
      • isbn = {979-8-3503-0689-7},
      • url = {https://www.merl.com/publications/TR2023-152}
      • }
    See All MERL Publications for Chiori
  • Software & Data Downloads

  • Videos

  • MERL Issued Patents

    • Title: "Long-context End-to-end Speech Recognition System"
      Inventors: Hori, Takaaki; Moritz, Niko; Hori, Chiori; Le Roux, Jonathan
      Patent No.: 11,978,435
      Issue Date: May 7, 2024
    • Title: "System and Method for Using Human Relationship Structures for Email Classification"
      Inventors: Harsham, Bret A.; Hori, Chiori
      Patent No.: 11,651,222
      Issue Date: May 16, 2023
    • Title: "Method and System for Scene-Aware Interaction"
      Inventors: Hori, Chiori; Cherian, Anoop; Chen, Siheng; Marks, Tim; Le Roux, Jonathan; Hori, Takaaki; Harsham, Bret A.; Vetro, Anthony; Sullivan, Alan
      Patent No.: 11,635,299
      Issue Date: Apr 25, 2023
    • Title: "Scene-Aware Video Encoder System and Method"
      Inventors: Cherian, Anoop; Hori, Chiori; Le Roux, Jonathan; Marks, Tim; Sullivan, Alan
      Patent No.: 11,582,485
      Issue Date: Feb 14, 2023
    • Title: "Low-latency Captioning System"
      Inventors: Hori, Chiori; Hori, Takaaki; Cherian, Anoop; Marks, Tim; Le Roux, Jonathan
      Patent No.: 11,445,267
      Issue Date: Sep 13, 2022
    • Title: "System and Method for a Dialogue Response Generation System"
      Inventors: Hori, Chiori; Cherian, Anoop; Marks, Tim; Hori, Takaaki
      Patent No.: 11,264,009
      Issue Date: Mar 1, 2022
    • Title: "Scene-Aware Video Dialog"
      Inventors: Geng, Shijie; Gao, Peng; Cherian, Anoop; Hori, Chiori; Le Roux, Jonathan
      Patent No.: 11,210,523
      Issue Date: Dec 28, 2021
    • Title: "Method and System for Multi-Label Classification"
      Inventors: Hori, Takaaki; Hori, Chiori; Watanabe, Shinji; Hershey, John R.; Harsham, Bret A.; Le Roux, Jonathan
      Patent No.: 11,086,918
      Issue Date: Aug 10, 2021
    • Title: "Position Estimation Under Multipath Transmission"
      Inventors: Kim, Kyeong-Jin; Orlik, Philip V.; Hori, Chiori
      Patent No.: 11,079,495
      Issue Date: Aug 3, 2021
    • Title: "Method and System for Multi-Modal Fusion Model"
      Inventors: Hori, Chiori; Hori, Takaaki; Hershey, John R.; Marks, Tim
      Patent No.: 10,417,498
      Issue Date: Sep 17, 2019
    • Title: "Method and System for Training Language Models to Reduce Recognition Errors"
      Inventors: Hori, Takaaki; Hori, Chiori; Watanabe, Shinji; Hershey, John R.
      Patent No.: 10,176,799
      Issue Date: Jan 8, 2019
    • Title: "Method and System for Role Dependent Context Sensitive Spoken and Textual Language Understanding with Neural Networks"
      Inventors: Hori, Chiori; Hori, Takaaki; Watanabe, Shinji; Hershey, John R.
      Patent No.: 9,842,106
      Issue Date: Dec 12, 2017
    See All Patents for MERL