Artificial Intelligence
Making machines smarter for improved safety, efficiency and comfort.
Our AI research encompasses advances in computer vision, speech and audio processing, and data analytics. Key research themes include improved perception based on machine learning techniques, learning control policies through model-based reinforcement learning, and cognition and reasoning based on learned semantic representations. We apply our work to a broad range of automotive and robotics applications, as well as building and home systems.
Quick Links
Researchers
Jonathan Le Roux, Toshiaki Koike-Akino, Ye Wang, Gordon Wichern, Anoop Cherian, Tim K. Marks, Chiori Hori, Michael J. Jones, Kieran Parsons, François Germain, Daniel N. Nikovski, Devesh K. Jha, Jing Liu, Suhas Lohit, Matthew Brand, Philip V. Orlik, Diego Romeres, Pu (Perry) Wang, Petros T. Boufounos, Moitreya Chatterjee, Siddarth Jain, Hassan Mansour, Kuan-Chuan Peng, William S. Yerazunis, Radu Corcodel, Yoshiki Masuyama, Arvind Raghunathan, Pedro Miraldo, Hongbo Sun, Yebin Wang, Ankush Chakrabarty, Jianlin Guo, Chungwei Lin, Yanting Ma, Bingnan Wang, Ryo Aihara, Stefano Di Cairano, Saviz Mowlavi, Anthony Vetro, Jinyun Zhang, Vedang M. Deshpande, Christopher R. Laughman, Dehong Liu, Alexander Schperberg, Wataru Tsujita, Abraham P. Vinod, Kenji Inomata, Na Li
Awards
AWARD MERL Wins Awards at NeurIPS LLM Privacy Challenge Date: December 15, 2024
Awarded to: Jing Liu, Ye Wang, Toshiaki Koike-Akino, Tsunato Nakai, Kento Oonishi, Takuya Higashi
MERL Contacts: Toshiaki Koike-Akino; Jing Liu; Ye Wang
Research Areas: Artificial Intelligence, Machine Learning, Information Security
Brief: The Mitsubishi Electric Privacy Enhancing Technologies (MEL-PETs) team, a collaboration of MERL and Mitsubishi Electric researchers, won awards at the NeurIPS 2024 Large Language Model (LLM) Privacy Challenge: the 3rd Place Award in the Blue Team track, and the Special Award for Practical Attack in the Red Team track.
AWARD University of Padua and MERL team wins the AI Olympics with RealAIGym competition at IROS24 Date: October 17, 2024
Awarded to: Niccolò Turcato, Alberto Dalla Libera, Giulio Giacomuzzo, Ruggero Carli, Diego Romeres
MERL Contact: Diego Romeres
Research Areas: Artificial Intelligence, Dynamical Systems, Machine Learning, Robotics
Brief: The team composed of the control group at the University of Padua and MERL's Optimization and Robotics team ranked 1st among the 4 finalist teams in the 2nd AI Olympics with RealAIGym competition at IROS 24, which focused on the control of under-actuated robots. The team consisted of Niccolò Turcato, Alberto Dalla Libera, Giulio Giacomuzzo, Ruggero Carli, and Diego Romeres. The competition was organized by the German Research Center for Artificial Intelligence (DFKI), the Technical University of Darmstadt, and Chalmers University of Technology.
The competition and award ceremony were hosted by the IEEE International Conference on Intelligent Robots and Systems (IROS) on October 17, 2024 in Abu Dhabi, UAE. Diego Romeres presented the team's method, based on a model-based reinforcement learning algorithm called MC-PILCO.
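MC-PILCO learns a probabilistic (Gaussian-process) model of the robot dynamics and improves a policy by simulating Monte Carlo rollouts through that model. The sketch below is a deliberately simplified illustration of that model-based RL pattern on a toy one-dimensional linear system, not the team's implementation: the least-squares dynamics model, quadratic cost, and grid search over linear policy gains are all placeholder assumptions standing in for the GP model and gradient-based policy optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown ground-truth dynamics: the agent only observes transition samples.
def true_step(x, u):
    return 0.9 * x + 0.5 * u

# 1) Collect a dataset of transitions under random actions.
X, U, Xn = [], [], []
x = 1.0
for _ in range(200):
    u = rng.uniform(-1, 1)
    xn = true_step(x, u)
    X.append(x); U.append(u); Xn.append(xn)
    x = xn if abs(xn) < 5 else rng.uniform(-1, 1)

# 2) Fit a dynamics model x' ~ a*x + b*u by least squares
#    (MC-PILCO fits a Gaussian-process model here instead).
A = np.column_stack([X, U])
(a, b), *_ = np.linalg.lstsq(A, np.array(Xn), rcond=None)

# 3) Evaluate candidate policies u = -k*x via Monte Carlo rollouts
#    through the *learned* model, and keep the best gain.
def rollout_cost(k, horizon=30, n_rollouts=20):
    costs = []
    for _ in range(n_rollouts):
        xs = rng.uniform(-1, 1)   # random initial state
        c = 0.0
        for _ in range(horizon):
            us = -k * xs
            xs = a * xs + b * us  # simulate with the learned model
            c += xs ** 2 + 0.01 * us ** 2
        costs.append(c)
    return np.mean(costs)

gains = np.linspace(0.0, 3.0, 31)
best_k = min(gains, key=rollout_cost)
```

On this toy system the recovered model coefficients match the true dynamics, and the selected gain drives the closed loop toward the origin; the real algorithm differs in every component but follows the same learn-model / simulate / improve-policy loop.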
AWARD MERL team wins the Listener Acoustic Personalisation (LAP) 2024 Challenge Date: August 29, 2024
Awarded to: Yoshiki Masuyama, Gordon Wichern, Francois G. Germain, Christopher Ick, and Jonathan Le Roux
MERL Contacts: François Germain; Jonathan Le Roux; Gordon Wichern; Yoshiki Masuyama
Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
Brief: MERL's Speech & Audio team ranked 1st out of 7 teams in Task 2 of the 1st SONICOM Listener Acoustic Personalisation (LAP) Challenge, which focused on "Spatial upsampling for obtaining a high-spatial-resolution HRTF from a very low number of directions". The team was led by Yoshiki Masuyama and also included Gordon Wichern, Francois Germain, MERL intern Christopher Ick, and Jonathan Le Roux.
The LAP Challenge workshop and award ceremony was hosted by the 32nd European Signal Processing Conference (EUSIPCO 24) on August 29, 2024 in Lyon, France. Yoshiki Masuyama presented the team's method, "Retrieval-Augmented Neural Field for HRTF Upsampling and Personalization", and received the award from Prof. Michele Geronazzo (University of Padova, IT, and Imperial College London, UK), Chair of the Challenge's Organizing Committee.
The LAP challenge aims to explore challenges in the field of personalized spatial audio, with the first edition focusing on the spatial upsampling and interpolation of head-related transfer functions (HRTFs). HRTFs with dense spatial grids are required for immersive audio experiences, but their recording is time-consuming. Although HRTF spatial upsampling has recently shown remarkable progress with approaches involving neural fields, HRTF estimation accuracy remains limited when upsampling from only a few measured directions, e.g., 3 or 5 measurements. The MERL team tackled this problem by proposing a retrieval-augmented neural field (RANF). RANF retrieves a subject whose HRTFs are close to those of the target subject at the measured directions from a library of subjects. The HRTF of the retrieved subject at the target direction is fed into the neural field in addition to the desired sound source direction. The team also developed a neural network architecture that can handle an arbitrary number of retrieved subjects, inspired by a multi-channel processing technique called transform-average-concatenate.
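The retrieval step at the heart of RANF can be illustrated with a small sketch: given the target subject's HRTF magnitudes at the few measured directions, find the library subject whose HRTFs are closest at those same directions. The toy array shapes, random placeholder data, and mean squared distance below are illustrative assumptions, not MERL's implementation; RANF then feeds the retrieved subject's HRTF into a neural field alongside the desired source direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy HRTF library: n_subjects x n_directions x n_freq magnitude responses.
# Real HRTFs are measured; these random arrays are placeholders.
n_subjects, n_directions, n_freq = 10, 64, 32
library = rng.normal(size=(n_subjects, n_directions, n_freq))

# The target subject is measured at only a few directions (e.g., 3 or 5).
measured_dirs = np.array([5, 20, 40])
target_full = library[7] + 0.01 * rng.normal(size=(n_directions, n_freq))
target_sparse = target_full[measured_dirs]

def retrieve(library, target_sparse, measured_dirs, k=1):
    """Return indices of the k library subjects closest to the target
    at the measured directions (mean squared distance)."""
    diffs = library[:, measured_dirs, :] - target_sparse  # broadcast over subjects
    dists = np.mean(diffs ** 2, axis=(1, 2))
    return np.argsort(dists)[:k]

nearest = retrieve(library, target_sparse, measured_dirs, k=1)
# The retrieved subject's dense HRTF then serves as a prior for the
# directions still to be upsampled by the neural field.
dense_prior = library[nearest[0]]
```

The transform-average-concatenate idea mentioned above is what lets the network accept k > 1 retrieved subjects: each retrieved HRTF is transformed independently, averaged, and concatenated back, so the architecture is indifferent to k.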
See All Awards for Artificial Intelligence
News & Events
EVENT SANE 2023 - Speech and Audio in the Northeast Date: Thursday, October 26, 2023
Location: New York University, Brooklyn, NY
MERL Contacts: Jonathan Le Roux; Gordon Wichern
Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
Brief: SANE 2023, a one-day event gathering researchers and students in speech and audio from the Northeast of the American continent, was held on Thursday, October 26, 2023 at NYU in Brooklyn, New York.
It was the 10th edition in the SANE series of workshops, which started in 2012 and is typically held every year alternately in Boston and New York. Since the first edition, the audience has steadily grown, and SANE 2023 broke SANE 2019's record with 200 participants and 51 posters.
This year's SANE took place in conjunction with the WASPAA workshop, held October 22-25 in upstate New York.
SANE 2023 featured invited talks by seven leading researchers from the Northeast and beyond: Arsha Nagrani (Google), Gaël Richard (Télécom Paris), Gordon Wichern (MERL), Kyunghyun Cho (NYU / Prescient Design), Anna Huang (Google DeepMind / MILA), Wenwu Wang (University of Surrey), and Yuan Gong (MIT). It also featured a lively poster session with 51 posters.
SANE 2023 was co-organized by Jonathan Le Roux (MERL), Juan P. Bello (NYU), and John R. Hershey (Google). SANE remained a free event thanks to generous sponsorship by NYU, MERL, Google, Adobe, Bose, Meta Reality Labs, and Amazon.
Slides and videos of the talks are available from the SANE workshop website.
NEWS Suhas Lohit presents invited talk at Boston Symmetry Day 2025 Date: March 31, 2025
Where: Northeastern University, Boston, MA
MERL Contact: Suhas Lohit
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
Brief: MERL researcher Suhas Lohit was an invited speaker at Boston Symmetry Day, held at Northeastern University. Boston Symmetry Day, an annual workshop organized by researchers at MIT and Northeastern, brought together attendees interested in symmetry-informed machine learning and its applications. Suhas' talk, titled “Efficiency for Equivariance, and Efficiency through Equivariance,” discussed recent MERL work showing how to build general and efficient equivariant neural networks, and how equivariance can be utilized in self-supervised learning to yield improved 3D object detection. The abstract and slides can be found in the link below.
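As a generic illustration of equivariant model construction (not the specific methods from the talk), symmetrizing an arbitrary function over a finite group, here the four 90-degree planar rotations, yields an exactly equivariant map. The function f below is a hypothetical placeholder standing in for any non-equivariant layer.

```python
import numpy as np

def symmetrize_c4(f):
    """Turn any image-to-image function f into a C4 (90-degree rotation)
    equivariant one by averaging over the group:
        f_eq(x) = (1/4) * sum over g of  g^(-1) f(g x)."""
    def f_eq(x):
        outs = [np.rot90(f(np.rot90(x, k)), -k) for k in range(4)]
        return np.mean(outs, axis=0)
    return f_eq

# An arbitrary, deliberately non-equivariant function.
def f(x):
    w = np.linspace(0.5, 1.5, x.shape[1])  # column-dependent weighting
    return x * w

f_eq = symmetrize_c4(f)

x = np.random.default_rng(0).normal(size=(8, 8))
lhs = f_eq(np.rot90(x))       # rotate input, then apply
rhs = np.rot90(f_eq(x))       # apply, then rotate output
assert np.allclose(lhs, rhs)  # equivariance holds by construction
```

Group averaging multiplies compute by the group size, which is exactly the kind of cost that efficiency-oriented equivariant architectures aim to avoid.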
See All News & Events for Artificial Intelligence
Research Highlights
- PS-NeuS: A Probability-guided Sampler for Neural Implicit Surface Rendering
- Quantum AI Technology
- TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models
- Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-Aware Spatio-Temporal Sampling
- Steered Diffusion
- Sustainable AI
- Robust Machine Learning
- mmWave Beam-SNR Fingerprinting (mmBSF)
- Video Anomaly Detection
- Biosignal Processing for Human-Machine Interaction
- Task-aware Unified Source Separation - Audio Examples
Internships
CA0129: Internship - LLM-guided Active SLAM for Mobile Robots
MERL is seeking interns passionate about robotics to contribute to the development of an Active Simultaneous Localization and Mapping (Active SLAM) framework guided by Large Language Models (LLM). The core objective is to achieve autonomous behavior for mobile robots. The methods will be implemented and evaluated in high performance simulators and (time-permitting) in actual robotic platforms, such as legged and wheeled robots. The expectation at the end of the internship is a publication at a top-tier robotic or computer vision conference and/or journal.
The internship has a flexible start date (Spring/Summer 2025), with a duration of 3-6 months depending on agreed scope and intermediate progress.
Required Specific Experience
- Current/Past Enrollment in a PhD Program in Computer Engineering, Computer Science, Electrical Engineering, Mechanical Engineering, or related field
- Experience with employing and fine-tuning LLM and/or Visual Language Models (VLM) for high-level context-aware planning and navigation
- 2+ years of experience with 3D computer vision (e.g., point clouds, voxels, camera pose estimation) and mapping, filter-based methods (e.g., EKF), and at least some of: motion planning algorithms, factor graphs, control, and optimization
- Excellent programming skills in Python and/or C/C++, with prior knowledge in ROS2 and high-fidelity simulators such as Gazebo, Isaac Lab, and/or Mujoco
Additional Desired Experience
- Prior experience with implementation and/or development of SLAM algorithms on robotic hardware, including acquisition, processing, and fusion of multimodal sensor data such as proprioceptive and exteroceptive sensors
OR0127: Internship - Deep Learning for Robotic Manipulation
MERL is looking for a highly motivated and qualified intern to work on deep learning methods for detection and pose estimation of objects using vision and tactile sensing in manufacturing and assembly environments. This role involves developing, fine-tuning, and deploying models on existing hardware. The methods will be applied to robotic manipulation, where knowledge of the accurate position and orientation of objects in the scene allows the robot to interact with them. The ideal candidate is a Ph.D. student familiar with state-of-the-art methods for pose estimation and tracking of objects. The successful candidate will work closely with MERL researchers to develop and implement novel algorithms, conduct experiments, and publish research findings at a top-tier conference. The start date and expected duration of the internship are flexible. Interested candidates are encouraged to apply with an updated CV and a list of relevant publications.
Required Specific Experience
- Prior experience in Computer Vision and Robotic Manipulation.
- Experience with ROS and deep learning frameworks such as PyTorch is essential.
- Strong programming skills in Python.
- Experience with simulation tools such as PyBullet, Isaac Lab, or MuJoCo.
CI0080: Internship - Efficient AI
We are on the lookout for passionate and skilled interns to join our cutting-edge research team focused on developing efficient machine learning techniques for sustainability. This is an exciting opportunity to make a real impact in the field of AI and environmental conservation, with the aim of publishing at leading AI research venues.
What We're Looking For:
- Advanced research experience in generative models and computationally efficient models
- Hands-on experience with large language models (LLMs), vision language models (VLMs), large multi-modal models (LMMs), and foundation models (FoMo)
- Deep understanding of state-of-the-art machine learning methods
- Proficiency in Python and PyTorch
- Familiarity with various deep learning frameworks
- Ph.D. candidates who have completed at least half of their program
Internship Details:
- Duration: approximately 3 months
- Flexible start dates available
- Objective: publish research results at leading AI research venues
If you are a highly motivated individual with a passion for applying AI to sustainability challenges, we want to hear from you! This internship offers a unique chance to work on meaningful projects at the intersection of machine learning and environmental sustainability.
See All Internships for Artificial Intelligence
Openings
See All Openings at MERL
Recent Publications
- "Quantum-PEFT: Ultra Parameter-Efficient Fine-Tuning", International Conference on Learning Representations (ICLR), April 2025. BibTeX TR2025-051 PDF
  @inproceedings{Koike-Akino2025apr,
    author = {Koike-Akino, Toshiaki and Tonin, Francesco and Wu, Yongtao and Wu, Frank Zhengqing and Candogan, Leyla Naz and Cevher, Volkan},
    title = {{Quantum-PEFT: Ultra Parameter-Efficient Fine-Tuning}},
    booktitle = {International Conference on Learning Representations (ICLR)},
    year = 2025,
    month = apr,
    url = {https://www.merl.com/publications/TR2025-051}
  }
- "Programmatic Video Prediction Using Large Language Models", International Conference on Learning Representations Workshops (ICLRW), April 2025. BibTeX TR2025-049 PDF
  @inproceedings{Tang2025apr,
    author = {Tang, Hao and Ellis, Kevin and Lohit, Suhas and Jones, Michael J. and Chatterjee, Moitreya},
    title = {{Programmatic Video Prediction Using Large Language Models}},
    booktitle = {International Conference on Learning Representations Workshops (ICLRW)},
    year = 2025,
    month = apr,
    url = {https://www.merl.com/publications/TR2025-049}
  }
- "30+ Years of Source Separation Research: Achievements and Future Challenges", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), April 2025. BibTeX TR2025-036 PDF
  @inproceedings{Araki2025mar,
    author = {Araki, Shoko and Ito, Nobutaka and Haeb-Umbach, Reinhold and Wichern, Gordon and Wang, Zhong-Qiu and Mitsufuji, Yuki},
    title = {{30+ Years of Source Separation Research: Achievements and Future Challenges}},
    booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
    year = 2025,
    month = mar,
    url = {https://www.merl.com/publications/TR2025-036}
  }
- "No Class Left Behind: A Closer Look at Class Balancing for Audio Tagging", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), April 2025. BibTeX TR2025-037 PDF
  @inproceedings{Ebbers2025mar,
    author = {Ebbers, Janek and Germain, François G and Wilkinghoff, Kevin and Wichern, Gordon and {Le Roux}, Jonathan},
    title = {{No Class Left Behind: A Closer Look at Class Balancing for Audio Tagging}},
    booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
    year = 2025,
    month = mar,
    url = {https://www.merl.com/publications/TR2025-037}
  }
- "O-EENC-SD: Efficient Online End-to-End Neural Clustering for Speaker Diarization", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), April 2025. BibTeX TR2025-031 PDF
  @inproceedings{Gruttadauria2025mar,
    author = {Gruttadauria, Elio and Fontaine, Mathieu and {Le Roux}, Jonathan and Essid, Slim},
    title = {{O-EENC-SD: Efficient Online End-to-End Neural Clustering for Speaker Diarization}},
    booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
    year = 2025,
    month = mar,
    url = {https://www.merl.com/publications/TR2025-031}
  }
- "Interactive Robot Action Replanning using Multimodal LLM Trained from Human Demonstration Videos", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), April 2025. BibTeX TR2025-034 PDF
  @inproceedings{Hori2025mar,
    author = {Hori, Chiori and Kambara, Motonari and Sugiura, Komei and Ota, Kei and Khurana, Sameer and Jain, Siddarth and Corcodel, Radu and Jha, Devesh K. and Romeres, Diego and {Le Roux}, Jonathan},
    title = {{Interactive Robot Action Replanning using Multimodal LLM Trained from Human Demonstration Videos}},
    booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
    year = 2025,
    month = mar,
    url = {https://www.merl.com/publications/TR2025-034}
  }
- "Retrieval-Augmented Neural Field for HRTF Upsampling and Personalization", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), April 2025. BibTeX TR2025-029 PDF Software
  @inproceedings{Masuyama2025mar,
    author = {Masuyama, Yoshiki and Wichern, Gordon and Germain, François G and Ick, Christopher and {Le Roux}, Jonathan},
    title = {{Retrieval-Augmented Neural Field for HRTF Upsampling and Personalization}},
    booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
    year = 2025,
    month = mar,
    url = {https://www.merl.com/publications/TR2025-029}
  }
- "Leveraging Audio-Only Data for Text-Queried Target Sound Extraction", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), April 2025. BibTeX TR2025-033 PDF
  @inproceedings{Saijo2025mar2,
    author = {Saijo, Kohei and Ebbers, Janek and Germain, François G and Khurana, Sameer and Wichern, Gordon and {Le Roux}, Jonathan},
    title = {{Leveraging Audio-Only Data for Text-Queried Target Sound Extraction}},
    booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
    year = 2025,
    month = mar,
    url = {https://www.merl.com/publications/TR2025-033}
  }
Videos
Software & Data Downloads
- MEL-PETs Joint-Context Attack for LLM Privacy Challenge
- MEL-PETs Defense for LLM Privacy Challenge
- Learned Born Operator for Reflection Tomographic Imaging
- Retrieval-Augmented Neural Field for HRTF Upsampling and Personalization
- Self-Monitored Inference-Time INtervention for Generative Music Transformers
- Transformer-based model with LOcal-modeling by COnvolution
- Sound Event Bounding Boxes
- Enhanced Reverberation as Supervision
- Gear Extensions of Neural Radiance Fields
- Long-Tailed Anomaly Detection Dataset
- Neural IIR Filter Field for HRTF Upsampling and Personalization
- Target-Speaker SEParation
- Pixel-Grounded Prototypical Part Networks
- Steered Diffusion
- Hyperbolic Audio Source Separation
- Simple Multimodal Algorithmic Reasoning Task Dataset
- Partial Group Convolutional Neural Networks
- SOurce-free Cross-modal KnowledgE Transfer
- Audio-Visual-Language Embodied Navigation in 3D Environments
- Nonparametric Score Estimators
- 3D MOrphable STyleGAN
- Instance Segmentation GAN
- Audio Visual Scene-Graph Segmentor
- Generalized One-class Discriminative Subspaces
- Goal directed RL with Safety Constraints
- Hierarchical Musical Instrument Separation
- Generating Visual Dynamics from Sound and Context
- Adversarially-Contrastive Optimal Transport
- Online Feature Extractor Network
- MotionNet
- FoldingNet++
- Quasi-Newton Trust Region Policy Optimization
- Landmarks’ Location, Uncertainty, and Visibility Likelihood
- Robust Iterative Data Estimation
- Gradient-based Nikaido-Isoda
- Discriminative Subspace Pooling