- Date & Time: Tuesday, February 28, 2023; 12:00 PM
Speaker: Prof. Kevin Lynch, Northwestern University
MERL Host: Diego Romeres
Research Areas: Machine Learning, Robotics
Abstract - Research at the Center for Robotics and Biosystems at Northwestern University includes bio-inspiration, neuromechanics, human-machine systems, and swarm robotics, among other topics. In this talk I will focus on our work on manipulation, including autonomous in-hand robotic manipulation and safe, intuitive human-collaborative manipulation among one or more humans and a team of mobile manipulators.
-
- Date & Time: Tuesday, February 14, 2023; 12:00 PM
Speaker: Stefanie Tellex, Brown University
MERL Host: Daniel N. Nikovski
Research Area: Robotics
Abstract - Robots can act as a force multiplier for people, whether a robot assisting an astronaut with a repair on the International Space Station, a UAV taking flight over our cities, or an autonomous vehicle driving through our streets. Existing approaches use action-based representations that do not capture the goal-based meaning of a language expression and do not generalize to partially observed environments. The aim of my research program is to create autonomous robots that can understand complex goal-based commands and execute those commands in partially observed, dynamic environments. I will describe demonstrations of object-search in a POMDP setting with information about object locations provided by language, and mapping between English and Linear Temporal Logic, enabling a robot to understand complex natural language commands in city-scale environments. These advances represent steps towards robots that interpret complex natural language commands in partially observed environments using a decision theoretic framework.
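As a toy illustration of what mapping English to Linear Temporal Logic buys, two temporal operators can be checked directly against a robot's execution trace. This is an invented minimal sketch, not the speaker's system; the trace and predicates are hypothetical.

```python
# Toy checker for two LTL operators over a finite trace. Real
# language-grounding systems handle full LTL over infinite executions;
# the trace and predicates here are hypothetical.

def eventually(prop, trace):
    """F prop: prop holds at some step of the trace."""
    return any(prop(state) for state in trace)

def always(prop, trace):
    """G prop: prop holds at every step of the trace."""
    return all(prop(state) for state in trace)

# A trace is a list of states; here, the robot's location at each step.
trace = ["start", "street", "store", "home"]

at_store = lambda s: s == "store"
avoids_river = lambda s: s != "river"

print(eventually(at_store, trace))    # "eventually reach the store"
print(always(avoids_river, trace))    # "always stay out of the river"
```

Once an English command is translated to such a formula, command satisfaction reduces to evaluating the formula against candidate plans.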
-
- Date & Time: Tuesday, January 31, 2023; 11:00 AM
Speaker: Rupert Way, University of Oxford
MERL Host: Ye Wang
Abstract - Rapidly decarbonising the global energy system is critical for addressing climate change, but concerns about costs have been a barrier to implementation. Historically, most energy-economy models have overestimated the future costs of renewable energy technologies and underestimated their deployment, thereby overestimating total energy transition costs. These issues have driven calls for alternative approaches and more reliable technology forecasting methods. We use an approach based on probabilistic cost forecasting methods to estimate future energy system costs in a variety of scenarios. Our findings suggest that, compared to continuing with a fossil fuel-based system, a rapid green energy transition will likely result in net savings of many trillions of dollars - even without accounting for climate damages or co-benefits of climate policy.
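The probabilistic forecasting approach builds on technology experience curves. As a rough illustration (not the paper's calibrated model), a Wright's-law cost path with lognormal noise can be sampled by Monte Carlo; every parameter value below is invented.

```python
# Monte Carlo sketch of an experience-curve ("Wright's law") cost forecast:
# cost falls as a power of cumulative production, with lognormal noise.
# Parameters are illustrative, not the paper's calibrated values.
import math
import random
import statistics

def forecast_cost(c0, growth, exponent, years, sigma, rng):
    """One sampled future cost under Wright's law with lognormal noise."""
    production = growth ** years             # cumulative production growth
    drift = c0 * production ** (-exponent)   # deterministic experience curve
    noise = math.exp(rng.gauss(0.0, sigma * math.sqrt(years)))
    return drift * noise

rng = random.Random(0)
samples = [forecast_cost(100.0, 1.2, 0.4, 20, 0.05, rng) for _ in range(5000)]
print(statistics.median(samples) < 100.0)  # costs fall in the median scenario
```

The distribution of `samples`, rather than a single point forecast, is what feeds the scenario cost comparison described in the abstract.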
-
- Date & Time: Tuesday, December 20, 2022; 1:00 PM
Speaker: William M. Sisson, WBCSD North America
MERL Host: Scott A. Bortoff
Abstract - Sustainability today encompasses three interconnected imperatives that all businesses must face and help to address: the increasing impact of climate change, the degradation of natural systems, and the growth of inequality. Business leaders increasingly understand, particularly with the engagement of capital markets, that investors, consumers, and other business stakeholders are setting expectations on how companies are responding to these challenges and preparing for their business impact. More and more companies have shifted from sustainability as a single function in the company to one that is integrated across the firm. This translates directly into how companies are rethinking their product design and innovation efforts for sustainability and the technologies they will require. Some product categories, like heating and air conditioning systems for buildings, are both part of the problem and potentially part of the solution.
-
- Date & Time: Tuesday, November 29, 2022; 1:00 PM
Speaker: Mathew Hampshire-Waugh, Net-Zero Consulting Services LTD
MERL Host: Ye Wang
Abstract - A seminar based upon the author's bestselling book, CLIMATE CHANGE and the road to NET-ZERO. The session will explore how humanity has broken free from the shackles of poverty, suffering, and war and for the first time in human history grown both population and prosperity. It will also delve into how a single species has reconfigured the natural world, repurposed the Earth's resources, and begun to re-engineer the climate.
Using these conflicting narratives, the talk will explore the science, economics, technology, and politics of climate change. Constructing an argument that demonstrates, under many energy transition pathways, solving global warming requires no trade-off between the economy and environment, present and future generations, or rich and poor. Ultimately concluding that a twenty-year transition to a zero-carbon system provides a win-win solution for all on planet Earth.
-
- Date & Time: Tuesday, November 1, 2022; 1:00 PM
Speaker: Jiajun Wu, Stanford University
MERL Host: Anoop Cherian
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
Abstract - The visual world has its inherent structure: scenes are made of multiple identical objects; different objects may have the same color or material, with a regular layout; each object can be symmetric and have repetitive parts. How can we infer, represent, and use such structure from raw data, without hampering the expressiveness of neural networks? In this talk, I will demonstrate that such structure, or code, can be learned from natural supervision. Here, natural supervision can be from pixels, where neuro-symbolic methods automatically discover repetitive parts and objects for scene synthesis. It can also be from objects, where humans during fabrication introduce priors that can be leveraged by machines to infer regular intrinsics such as texture and material. When solving these problems, structured representations and neural nets play complementary roles: it is more data-efficient to learn with structured representations, and they generalize better to new scenarios with robustly captured high-level information; neural nets effectively extract complex, low-level features from cluttered and noisy visual data.
-
- Date & Time: Wednesday, October 26, 2022; 1:00 PM
Speaker: Ufuk Topcu, The University of Texas at Austin
MERL Host: Abraham P. Vinod
Research Areas: Control, Dynamical Systems, Optimization
Abstract - Autonomous systems are emerging as a driving technology for countless applications. Numerous disciplines tackle the challenges toward making these systems trustworthy, adaptable, user-friendly, and economical. On the other hand, the existing disciplinary boundaries delay and possibly even obstruct progress. I argue that the nonconventional problems that arise in designing and verifying autonomous systems require hybrid solutions at the intersection of learning, formal methods, and controls. I will present examples of such hybrid solutions in the context of learning in sequential decision-making processes. These results offer novel means for effectively integrating physics-based, contextual, or structural prior knowledge into data-driven learning algorithms. They improve data efficiency by several orders of magnitude and generalize better to environments and tasks that the system has not previously experienced.
-
- Date & Time: Friday, October 14, 2022; 11:00 AM
Speaker: Gianmario Pellegrino, Politecnico di Torino, Italy
Research Areas: Electric Systems, Electronic and Photonic Devices, Multi-Physical Modeling, Optimization
Abstract - This seminar presents a comprehensive design and simulation procedure for Permanent Magnet Synchronous Machines (PMSMs) for traction applications. The design of heavily saturated traction PMSMs is a multidisciplinary engineering challenge that CAD software suites struggle to capture, whereas analytical design equations are far too approximate for the purpose. This tutorial will present the design toolchain of SyR-e, where magnetic and structural design equations are fast-FEA corrected for an insightful initial design, later FEA calibrated with free or commercial FEA tools. One e-motor will be designed from scratch, referring to the specs and size of the Tesla Model 3 rear-axle e-motor. The circuital model of one motor with inverter and discrete-time control will be automatically generated, in Simulink and PLECS, with accessible torque control source code, for simulation of healthy and faulty conditions, ready for real-time implementation (e.g., HiL).
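For readers unfamiliar with PMSM sizing, the torque targeted by such a design flow follows the standard dq-frame equation, with a magnet term and a reluctance term from saliency. A minimal sketch with invented parameters, not SyR-e code or Tesla Model 3 values:

```python
# Standard dq-frame torque equation for a PMSM; all numbers are illustrative.
def pmsm_torque(pole_pairs, lam_pm, L_d, L_q, i_d, i_q):
    """T = 1.5 * p * (lam_pm * i_q + (L_d - L_q) * i_d * i_q)."""
    magnet = lam_pm * i_q                  # magnet (alignment) torque
    reluctance = (L_d - L_q) * i_d * i_q   # reluctance torque from saliency
    return 1.5 * pole_pairs * (magnet + reluctance)

# Interior PMSM: L_q > L_d, so negative i_d adds positive reluctance torque.
T = pmsm_torque(pole_pairs=3, lam_pm=0.1, L_d=0.3e-3, L_q=0.6e-3,
                i_d=-200.0, i_q=300.0)
print(round(T, 1))
```

In heavily saturated traction machines the inductances themselves vary with current, which is exactly why the toolchain corrects these equations against fast FEA.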
-
- Date & Time: Thursday, October 13, 2022; 1:30 PM - 2:30 PM
Speaker: Prof. Shaoshuai Mou, Purdue University
MERL Host: Yebin Wang
Research Areas: Control, Machine Learning, Optimization
Abstract - Modern society relies more and more on engineering advances in autonomous systems, ranging from individual systems (such as a robotic arm for manufacturing, a self-driving car, or an autonomous vehicle for planetary exploration) to cooperative systems (such as a human-robot team or swarms of drones). In this talk we will present our most recent progress in developing a fundamental framework for learning and control in autonomous systems. The framework comes from a differentiation of Pontryagin’s Maximum Principle and is able to provide a unified solution to three classes of learning/control tasks, i.e. adaptive autonomy, inverse optimization, and system identification. We will also present applications of this framework to human-autonomy teaming, especially in enabling an autonomous system to take guidance, which is usually sparse and vague, from human operators.
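The mechanics behind differentiating an optimal-control problem can be glimpsed in the discrete-time costate (adjoint) recursion, the building block that Pontryagin-style differentiation rests on. A toy scalar sketch, not the speaker's framework, checked against finite differences:

```python
# Toy discrete-time costate (adjoint) recursion for
# x_{k+1} = a*x_k + b*u_k with cost J = sum_k (x_k^2 + u_k^2) + x_N^2.
# Everything below is invented for illustration.

def rollout(a, b, x0, u):
    xs = [x0]
    for uk in u:
        xs.append(a * xs[-1] + b * uk)
    return xs

def cost(a, b, x0, u):
    xs = rollout(a, b, x0, u)
    return sum(x * x for x in xs[:-1]) + sum(uk * uk for uk in u) + xs[-1] ** 2

def gradient(a, b, x0, u):
    """dJ/du via the backward costate recursion lam_k = 2*x_k + a*lam_{k+1}."""
    xs = rollout(a, b, x0, u)
    lam = 2.0 * xs[-1]                 # terminal costate d(x_N^2)/dx_N
    g = [0.0] * len(u)
    for k in reversed(range(len(u))):
        g[k] = 2.0 * u[k] + b * lam    # dJ/du_k uses lam_{k+1}
        lam = 2.0 * xs[k] + a * lam    # step the costate backwards
    return g

a, b, x0, u = 0.9, 0.5, 1.0, [0.1, -0.2, 0.3]
g = gradient(a, b, x0, u)

# Sanity check against a central finite difference on u[0].
eps = 1e-6
up, um = u[:], u[:]
up[0] += eps
um[0] -= eps
fd = (cost(a, b, x0, up) - cost(a, b, x0, um)) / (2 * eps)
print(abs(g[0] - fd) < 1e-6)
```

The backward pass costs one sweep regardless of the number of controls, which is what makes adjoint-based learning and inverse optimization tractable.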
-
- Date & Time: Tuesday, September 6, 2022; 12:00 PM EDT
Speaker: Chuang Gan, UMass Amherst & MIT-IBM Watson AI Lab
MERL Host: Jonathan Le Roux
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Speech & Audio
Abstract - Human sensory perception of the physical world is rich and multimodal and can flexibly integrate input from all five sensory modalities -- vision, touch, smell, hearing, and taste. However, in AI, attention has primarily focused on visual perception. In this talk, I will introduce my efforts in connecting vision with sound, which will allow machine perception systems to see objects and infer physics from multi-sensory data. In the first part of my talk, I will introduce a self-supervised approach that learns to parse images and separate sound sources by watching and listening to unlabeled videos, without requiring additional manual supervision. In the second part of my talk, I will show how we may further infer the underlying causal structure in 3D environments through visual and auditory observations. This enables agents to seek the source of a repeating environmental sound (e.g., an alarm) or identify what object has fallen, and where, from an intermittent impact sound.
-
- Date & Time: Tuesday, May 3, 2022; 1:00 PM
Speaker: Michael Posa, University of Pennsylvania
MERL Host: Devesh K. Jha
Research Areas: Control, Optimization, Robotics
Abstract - Machine learning has shown incredible promise in robotics, with some notable recent demonstrations in manipulation and sim2real transfer. These results, however, require either an accurate a priori model (for simulation) or a large amount of data. In contrast, my lab is focused on enabling robots to enter novel environments and then, with minimal time to gather information, accomplish complex tasks. In this talk, I will argue that the hybrid or contact-driven nature of real-world robotics, where a robot must safely and quickly interact with objects, drives this high data requirement. In particular, the inductive biases inherent in standard learning methods fundamentally clash with the non-differentiable physics of contact-rich robotics. Focusing on model learning, or system identification, I will show both empirical and theoretical results which demonstrate that contact stiffness leads to poor training and generalization, leading to some healthy skepticism of simulation experiments trained on artificially soft environments. Fortunately, implicit learning formulations, which embed convex optimization problems, can dramatically reshape the optimization landscape for these stiff problems. By carefully reasoning about the roles of stiffness and discontinuity, and integrating non-smooth structures, we demonstrate dramatically improved learning performance. Within this family of approaches, ContactNets accurately identifies the geometry and dynamics of a six-sided cube bouncing, sliding, and rolling across a surface from only a handful of sample trajectories. Similarly, a piecewise-affine hybrid system with thousands of modes can be identified purely from state transitions. Time permitting, I'll discuss how these learned models can be deployed for control via recent results in real-time, multi-contact MPC.
-
- Date & Time: Tuesday, April 12, 2022; 11:00 AM EDT
Speaker: Sebastien Gros, NTNU
Research Areas: Control, Dynamical Systems, Optimization
Abstract - Reinforcement Learning (RL), similarly to many AI-based techniques, is currently receiving a great deal of attention. RL is most commonly supported by classic Machine Learning techniques, typically Deep Neural Networks (DNNs). While there are good motivations for using DNNs in RL, there are also significant drawbacks. The lack of “explainability” of the resulting control policies and the difficulty of providing guarantees on their closed-loop behavior (safety, stability) make DNN-based policies problematic in many applications. In this talk, we will discuss an alternative approach to supporting RL, via formal optimal control tools based on Model Predictive Control (MPC). This approach alleviates the issues detailed above but also presents some challenges. We will discuss why MPC is a valid tool to support RL, and how MPC can be combined with RL (RLMPC). We will then discuss some recent results regarding this combination, the known challenges, and the kinds of control applications where we believe RLMPC will be a valuable approach.
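A rough sketch of the RLMPC idea in a deliberately simplified setting: a one-step MPC acts as the policy, and its model parameter is tuned from observed closed-loop cost, with grid search standing in for the RL update. The system, costs, and numbers are invented for illustration.

```python
# Toy version of the RL-plus-MPC idea: an MPC with a tunable model parameter
# is the policy; the parameter is adjusted to minimize closed-loop cost.

def mpc_gain(a_hat, b, r):
    """One-step MPC for x+ = a_hat*x + b*u, min (x+)^2 + r*u^2 -> u = -K*x."""
    return a_hat * b / (b * b + r)

def closed_loop_cost(a_true, b, r, K, x0=1.0, steps=50):
    """Roll out u = -K*x on the TRUE system and accumulate the cost."""
    x, J = x0, 0.0
    for _ in range(steps):
        u = -K * x
        J += x * x + r * u * u
        x = a_true * x + b * u
    return J

a_true, b, r = 0.95, 0.5, 0.1
# "RL" step: pick the MPC model parameter with the best closed-loop cost.
candidates = [0.5 + 0.01 * i for i in range(151)]   # a_hat in [0.5, 2.0]
best = min(candidates,
           key=lambda ah: closed_loop_cost(a_true, b, r, mpc_gain(ah, b, r)))
naive = closed_loop_cost(a_true, b, r, mpc_gain(a_true, b, r))
tuned = closed_loop_cost(a_true, b, r, mpc_gain(best, b, r))
print(tuned <= naive)
```

Notably, the best `a_hat` need not equal the true dynamics: with a short horizon, a deliberately biased model can yield a better closed-loop policy, which is one of the motivations for closing the loop between RL and MPC.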
-
- Date & Time: Tuesday, April 5, 2022; 11:00 AM EDT
Speaker: Albert Benveniste, Benoît Caillaud, and Mathias Malandain, Inria
MERL Host: Scott A. Bortoff
Research Areas: Dynamical Systems, Multi-Physical Modeling
Abstract - Since its 3.3 release, Modelica offers the possibility to specify models of dynamical systems with multiple modes having different DAE-based dynamics. However, the handling of such models by current Modelica tools is not satisfactory, with mathematically sound models yielding exceptions at runtime. In our introduction, we will briefly explain why and when the approximate structural analysis implemented in current Modelica tools leads to such errors. Then we will present our multimode Pryce Sigma-method for index reduction, in which the mode-dependent Sigma-matrix is represented in a dual form, by attaching, to every valuation of the sigma_ij entry of the Sigma matrix, the predicate characterizing the set of modes in which sigma_ij takes this value. We will illustrate this multimode analysis with an example, using our IsamDAE tool. In a second part, we will complement this multimode DAE structural analysis with a new structural analysis of mode changes (and, more generally, transient modes holding for zero time). Mode changes often give rise to impulsive behaviors: we will present a compile-time analysis identifying such behaviors. Our structural analysis of mode changes deeply relies on nonstandard analysis, a mathematical framework in which infinitesimals and infinities are first-class citizens.
-
- Date & Time: Wednesday, March 30, 2022; 11:00 AM EDT
Speaker: Vincent Sitzmann, MIT
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
Abstract - Given only a single picture, people are capable of inferring a mental representation that encodes rich information about the underlying 3D scene. We acquire this skill not through massive labeled datasets of 3D scenes, but through self-supervised observation and interaction. Building machines that can infer similarly rich neural scene representations is critical if they are to one day parallel people’s ability to understand, navigate, and interact with their surroundings. This poses a unique set of challenges that sets neural scene representations apart from conventional representations of 3D scenes: Rendering and processing operations need to be differentiable, and the type of information they encode is unknown a priori, requiring them to be extraordinarily flexible. At the same time, training them without ground-truth 3D supervision is an underdetermined problem, highlighting the need for structure and inductive biases without which models converge to spurious explanations.
I will demonstrate how we can equip neural networks with inductive biases that enable them to learn 3D geometry, appearance, and even semantic information, self-supervised only from posed images. I will show how this approach unlocks the learning of priors, enabling 3D reconstruction from only a single posed 2D image, and how we may extend these representations to other modalities such as sound. I will then discuss recent work on learning the neural rendering operator to make rendering and training fast, and how this speed-up enables us to learn object-centric neural scene representations, learning to decompose 3D scenes into objects given only images. Finally, I will talk about a recent application of self-supervised scene representation learning in robotic manipulation, where it enables us to learn to manipulate classes of objects in unseen poses from only a handful of human demonstrations.
-
- Date & Time: Tuesday, March 15, 2022; 1:00 PM EDT
Speaker: Arjuna Madanayake, Florida International University
Research Areas: Applied Physics, Electronic and Photonic Devices, Multi-Physical Modeling
Abstract - Analog computers are making a comeback. In fact, they are taking the world by storm. After decades of “analog computing winter” that followed the invention of the digital computing paradigm in the 1940s, classical physics-based analog computers are being reconsidered for improving the computational throughput of demanding applications. The research is driven by exponential growth in transistor densities and bandwidths in the integrated-circuits world, which, in turn, has led to new possibilities for the creative circuit designer. Fast analog chips not only furnish communication/radar front-ends but can also be used to accelerate mathematical operations. Most analog computers today focus on AI and machine learning. For example, analog in-memory computing plays an exciting role in AI acceleration because linear algebra operations can be mapped efficiently to compute-in-memory. However, many scientific computing tasks are built on linear and nonlinear partial differential equations (PDEs) that require recursive numerical PDE solution across spatial and temporal dimensions. The adoption of analog parallel processors that are built around the speed vs. power efficiency vs. precision trade-offs available from circuitry for PDE solution requires new research in computer architecture. We report on recent progress on CMOS-based analog computers for solving computational electromagnetics and nonlinear pressure wave equations. Our first analog computing chip was measured to be more than 400x faster than a top-of-the-line NVIDIA GPU while consuming 1000x less power for elementary computational electromagnetics computations using a finite-difference time-domain scheme.
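As a reference point for the electromagnetics workload mentioned above, the finite-difference time-domain scheme reduces to a pair of staggered update loops. A minimal 1-D sketch in normalized units, purely illustrative and unrelated to the chip itself:

```python
# Minimal 1-D FDTD (Yee-style) update in normalized units: E and H fields
# on staggered grids, Courant number 0.5, soft Gaussian source mid-grid.
import math

n, steps = 200, 150
ez = [0.0] * n
hy = [0.0] * n

for t in range(steps):
    # H update (half a step ahead of E on the staggered grid)
    for k in range(n - 1):
        hy[k] += 0.5 * (ez[k + 1] - ez[k])
    # E update
    for k in range(1, n):
        ez[k] += 0.5 * (hy[k] - hy[k - 1])
    # Soft Gaussian source injected at the middle of the grid
    ez[n // 2] += math.exp(-((t - 30) ** 2) / 100.0)

peak = max(abs(v) for v in ez)
print(peak > 0.1)  # the injected pulse has propagated into the grid
```

Every grid cell's update depends only on its neighbors, which is exactly the locality that analog parallel hardware can exploit: all cells can be updated simultaneously in continuous time rather than swept sequentially.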
-
- Date & Time: Tuesday, March 1, 2022; 1:00 PM EST
Speaker: David Harwath, The University of Texas at Austin
MERL Host: Chiori Hori
Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
Abstract - Humans learn spoken language and visual perception at an early age by being immersed in the world around them. Why can't computers do the same? In this talk, I will describe our ongoing work to develop methodologies for grounding continuous speech signals at the raw waveform level to natural image scenes. I will first present self-supervised models capable of discovering discrete, hierarchical structure (words and sub-word units) in the speech signal. Instead of conventional annotations, these models learn from correspondences between speech sounds and visual patterns such as objects and textures. Next, I will demonstrate how these discrete units can be used as a drop-in replacement for text transcriptions in an image captioning system, enabling us to directly synthesize spoken descriptions of images without the need for text as an intermediate representation. Finally, I will describe our latest work on Transformer-based models of visually-grounded speech. These models significantly outperform the prior state of the art on semantic speech-to-image retrieval tasks, and also learn representations that are useful for a multitude of other speech processing tasks.
-
- Date & Time: Tuesday, February 15, 2022; 1:00 PM EST
Speaker: Katie Bouman, California Institute of Technology
MERL Host: Joshua Rapp
Research Area: Computational Sensing
Abstract - As imaging requirements become more demanding, we must rely on increasingly sparse and/or noisy measurements that fail to paint a complete picture. Computational imaging pipelines, which replace optics with computation, have enabled image formation in situations that are impossible for conventional optical imaging. For instance, the first black hole image, published in 2019, was only made possible through the development of computational imaging pipelines that worked alongside an Earth-sized distributed telescope. However, remaining scientific questions motivate us to improve this computational telescope to see black hole phenomena still invisible to us and to meaningfully interpret the collected data. This talk will discuss how we are leveraging and building upon recent advances in machine learning in order to achieve more efficient uncertainty quantification of reconstructed images as well as to develop techniques that allow us to extract the evolving structure of our own Milky Way's black hole over the course of a night, perhaps even in three dimensions.
-
- Date & Time: Tuesday, February 8, 2022; 1:00 PM EST
Speaker: Raphaël Pestourie, MIT
MERL Host: Matthew Brand
Research Areas: Applied Physics, Electronic and Photonic Devices, Optimization
Abstract - Thin large-area structures with aperiodic subwavelength patterns can unleash the full power of Maxwell’s equations for focusing light and a variety of other wave transformation or optical applications. Because of their irregularity and large scale, capturing the full scattering through these devices is one of the most challenging tasks for computational design: enter extreme optics! This talk will present ways to harness the full computational power of modern large-scale optimization in order to design optical devices with thousands or millions of free parameters. We exploit various methods of domain-decomposition approximations, supercomputer-scale topology optimization, laptop-scale “surrogate” models based on Chebyshev interpolation and/or new scientific machine learning models, and other techniques to attack challenging problems: achromatic lenses that simultaneously handle many wavelengths and angles, “deep” images, hyperspectral imaging, and more.
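The surrogate idea can be sketched in a few lines: sample an "expensive" function at Chebyshev nodes, build an interpolant, and thereafter query the cheap interpolant instead. The toy function below is invented; the real surrogates approximate Maxwell-solver outputs.

```python
# Chebyshev-interpolation surrogate: a handful of "expensive" evaluations
# yield a cheap, highly accurate stand-in on [-1, 1]. Toy function only.
import math

def cheb_nodes(n):
    """Chebyshev points of the first kind on [-1, 1]."""
    return [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]

def interp(xs, ys, x):
    """Evaluate the Lagrange interpolant through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += yi * w
    return total

def expensive(x):                 # stand-in for a costly solver call
    return math.exp(x) * math.sin(3 * x)

xs = cheb_nodes(16)               # 16 "solver" evaluations total
ys = [expensive(x) for x in xs]

# The cheap surrogate tracks the expensive function across [-1, 1].
err = max(abs(interp(xs, ys, -1.0 + 0.01 * t) - expensive(-1.0 + 0.01 * t))
          for t in range(201))
print(err < 1e-6)
```

For smooth responses the interpolation error decays geometrically with the number of nodes, which is why a few solver calls can replace millions inside an optimization loop.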
-
- Date & Time: Tuesday, December 14, 2021; 1:00 PM EST
Speaker: Prof. Chris Fletcher, University of Waterloo
MERL Host: Ankush Chakrabarty
Research Areas: Dynamical Systems, Machine Learning, Multi-Physical Modeling
Abstract - Decision-making and adaptation to climate change requires quantitative projections of the physical climate system and an accurate understanding of the uncertainty in those projections. Earth system models (ESMs), which solve the Navier-Stokes equations on the sphere, are the only tool that climate scientists have to make projections forward into climate states that have not been observed in the historical data record. Yet, ESMs are incredibly complex and expensive codes and contain many poorly constrained physical parameters—for processes such as clouds and convection—that must be calibrated against observations. In this talk, I will describe research from my group that uses ensembles of ESM simulations to train statistical models that learn the behavior and sensitivities of the ESM. Once trained and validated, the statistical models are essentially free to run, which allows climate modelling centers to make more efficient use of precious compute cycles. The aim is to improve the quality of future climate projections, by producing better calibrated ESMs, and to improve the quantification of the uncertainties, by better sampling the equifinality of climate states.
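A minimal sketch of the emulator workflow, with a toy quadratic standing in for an ESM (everything here is invented): run the expensive model at a few parameter settings, fit a cheap statistical model by least squares, then query the emulator essentially for free.

```python
# Emulator sketch: fit a quadratic regression to a few "simulation" runs,
# then predict at unsampled parameter values. The toy model stands in for
# an ESM; all numbers are invented.

def expensive_model(p):           # stand-in for one ESM run at parameter p
    return 2.0 + 1.5 * p + 0.5 * p * p

def solve3(A, b):
    """Gaussian elimination for a 3x3 system (no pivoting; fine here)."""
    A = [row[:] for row in A]
    b = b[:]
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            for k in range(i, 3):
                A[j][k] -= f * A[i][k]
            b[j] -= f * b[i]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (b[i] - sum(A[i][k] * x[k] for k in range(i + 1, 3))) / A[i][i]
    return x

# Design points: a handful of "simulations".
ps = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [expensive_model(p) for p in ps]

# Fit y ~ c0 + c1*p + c2*p^2 via the normal equations.
phi = [[1.0, p, p * p] for p in ps]
AtA = [[sum(row[i] * row[j] for row in phi) for j in range(3)] for i in range(3)]
Aty = [sum(phi[r][i] * ys[r] for r in range(len(ps))) for i in range(3)]
c = solve3(AtA, Aty)

emulator = lambda p: c[0] + c[1] * p + c[2] * p * p
print(abs(emulator(0.75) - expensive_model(0.75)) < 1e-8)
```

Real emulators use richer statistical models and many more inputs, but the division of labor is the same: the ESM provides sparse training runs, and the emulator fills in the parameter space for calibration and uncertainty sampling.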
-
- Date & Time: Tuesday, December 7, 2021; 1:00 PM EST
Speaker: Prof. Eric Severson, University of Wisconsin-Madison
MERL Host: Bingnan Wang
Research Area: Electric Systems
Abstract - Electric motors pump our water, heat and cool our homes and offices, drive critical medical and surgical equipment, and, increasingly, operate our transportation systems. Approximately 99% of the world’s electric energy is produced by a rotating generator and 45% of that energy is consumed by an electric motor. The efficiency of this technology is vital in enabling our energy sustainability and reducing our carbon footprint. The reliability and lifetime of this technology have severe, and sometimes life-altering, consequences. Today’s motor technology largely relies upon mechanical bearings to support the motor’s shaft. These bearings are the first components to fail, create frictional losses, and rely on lubricants that create contamination challenges and require periodic maintenance. In short, bearings are the Achilles' heel of modern electric motors.
This seminar will explore the use of actively controlled magnetic forces to levitate the motor shaft, eliminating mechanical bearings and the problems associated with them. The working principles of traditional magnetic levitation technology (active magnetic bearings) will be reviewed and used to explain why this technology has not been successfully applied to the most high-impact motor applications. Research into “bearingless” motors offers a new levitation approach by manipulating the inherent magnetic force capability of all electric motors. While traditional motors are carefully designed to prevent shaft forces, the bearingless motor concept controls these forces to make the motor simultaneously function as an active magnetic bearing. The seminar will showcase the potential of bearingless technology to revolutionize motor systems of critical importance for energy and sustainability—from industrial compressors and blowers, such as those found in HVAC systems and wastewater aeration equipment, to power grid flywheel energy storage devices and electric turbochargers in fuel-efficient vehicles.
-
- Date & Time: Tuesday, November 16, 2021; 11:00 AM EST
Speaker: Thomas Schön, Uppsala University
Research Areas: Dynamical Systems, Machine Learning
Abstract - While deep learning-based classification is generally addressed using standardized approaches, this is not the case for regression problems. There are currently several different approaches used for regression, and there is still room for innovation. We have developed a general deep regression method with a clear probabilistic interpretation. The basic building block in our construction is an energy-based model of the conditional output density p(y|x), where we use a deep neural network to predict the un-normalized density from input-output pairs (x, y). Such a construction is also commonly referred to as an implicit representation. The resulting learning problem is challenging, and we offer some insights on how to deal with it. We show good performance on several computer vision regression tasks, system identification problems, and 3D object detection using laser data.
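The implicit construction can be made concrete with a hand-written energy in place of the DNN: p(y|x) is defined as exp(f(x, y)) divided by a normalizer computed numerically over a grid of candidate outputs. A toy sketch under that substitution, not the authors' implementation:

```python
# Energy-based conditional density: p(y|x) = exp(f(x, y)) / Z(x), with Z(x)
# estimated by a Riemann sum over a grid. The scalar f below stands in for
# the learned network.
import math

def f(x, y):                      # stand-in for the learned scalar network
    return -((y - math.sin(x)) ** 2) / 0.5

def density(x, ys):
    """Normalize exp(f) numerically on a uniform grid of outputs ys."""
    w = [math.exp(f(x, y)) for y in ys]
    dy = ys[1] - ys[0]
    z = sum(w) * dy               # simple Riemann-sum normalizer Z(x)
    return [wi / z for wi in w]

ys = [-4.0 + 0.01 * i for i in range(801)]
p = density(1.0, ys)

mass = sum(p) * 0.01              # should integrate to one
mode = ys[max(range(len(ys)), key=lambda i: p[i])]
print(abs(mass - 1.0) < 1e-9, abs(mode - math.sin(1.0)) < 0.01)
```

The hard part the abstract alludes to is that with a real DNN this normalizer has no closed form, so training must approximate it, for example by Monte Carlo sampling of candidate outputs.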
-
- Date & Time: Tuesday, November 9, 2021; 1:00 PM EST
Speaker: Prof. Marco Di Renzo, CNRS & Paris-Saclay University
Research Areas: Communications, Electronic and Photonic Devices, Signal Processing
Abstract - A Reconfigurable Intelligent Surface (RIS) is a planar structure that is engineered to have properties enabling the dynamic control of electromagnetic waves. In wireless communications and networks, RISs are an emerging technology for realizing programmable and reconfigurable wireless propagation environments through nearly passive and tunable signal transformations. RIS-assisted programmable wireless environments are a multidisciplinary research endeavor. This presentation aims to report the latest research advances on modeling, analyzing, and optimizing RISs for wireless communications, with a focus on electromagnetically consistent models, analytical frameworks, and optimization algorithms.
-
- Date & Time: Tuesday, November 2, 2021; 1:00 PM EST
Speaker: Dr. Hsiao-Yu (Fish) Tung, MIT BCS
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Robotics
Abstract - Current state-of-the-art CNNs can localize and name objects in internet photos, yet they miss the basic knowledge that a two-year-old toddler possesses: objects persist over time despite changes in the observer’s viewpoint or during cross-object occlusions; objects have 3D extent; solid objects do not pass through each other. In this talk, I will introduce neural architectures that learn to parse video streams of a static scene into world-centric 3D feature maps by disentangling camera motion from scene appearance. I will show that the proposed architectures learn object permanence, can imagine RGB views from novel viewpoints in truly novel scenes, can conduct basic spatial reasoning and planning, can infer affordances in scenes, and can learn geometry-aware 3D concepts that allow pose-aware object recognition to happen with weak/sparse labels. Our experiments suggest that the proposed architectures are essential for the models to generalize across objects and locations, and that they overcome many limitations of 2D CNNs. I will show how we can use the proposed 3D representations to build machine perception and physical understanding closer to that of humans.
-
- Date & Time: Tuesday, October 12, 2021; 1:00 PM EST
Speaker: Prof. Greg Ongie, Marquette University
MERL Host: Hassan Mansour
Research Areas: Computational Sensing, Machine Learning, Signal Processing
Abstract - Deep learning is emerging as a powerful tool for solving challenging inverse problems in computational imaging, including basic image restoration tasks like denoising and deblurring, as well as image reconstruction problems in medical imaging. This talk will give an overview of the state-of-the-art supervised learning techniques in this area and discuss two recent innovations: deep equilibrium architectures, which allow one to train an effectively infinite-depth reconstruction network, and model adaptation methods, which allow one to adapt a pre-trained reconstruction network to changes in the imaging forward model at test time.
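The "effectively infinite depth" of a deep equilibrium architecture comes from defining the output as a fixed point of a layer rather than as a finite stack of layers. A scalar toy sketch of that idea, not an imaging network:

```python
# Deep-equilibrium sketch: the output is the fixed point z* = g(z*, x) of a
# single layer g, found by iteration. The scalar layer below is a toy; real
# models use a learned network and implicit differentiation for training.
import math

def layer(z, x, w=0.5, b=0.1):
    """One application of the (contractive) layer g(z, x)."""
    return math.tanh(w * z + x + b)

def equilibrium(x, tol=1e-12, max_iter=1000):
    """Iterate the layer to its fixed point: infinite depth, one layer."""
    z = 0.0
    for _ in range(max_iter):
        z_next = layer(z, x)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

z_star = equilibrium(0.3)
# The fixed point reproduces itself: one more layer application is a no-op.
print(abs(layer(z_star, 0.3) - z_star) < 1e-10)
```

Because the output is characterized by the fixed-point equation rather than by the iteration path, gradients can be obtained implicitly without storing any intermediate "layers", which is what makes the effectively infinite depth trainable.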
-
- Date & Time: Tuesday, September 28, 2021; 1:00 PM EST
Speaker: Dr. Ruohan Gao, Stanford University
MERL Host: Gordon Wichern
Research Areas: Computer Vision, Machine Learning, Speech & Audio
Abstract - While computer vision has made significant progress by "looking" — detecting objects, actions, or people based on their appearance — it often does not listen. Yet cognitive science tells us that perception develops by making use of all our senses without intensive supervision. Towards this goal, in this talk I will present my research on audio-visual learning: we disentangle object sounds from unlabeled video, use audio as an efficient preview for action recognition in untrimmed video, decode the monaural soundtrack into its binaural counterpart by injecting visual spatial information, and use echoes to interact with the environment for spatial image representation learning. Together, these are steps towards a multimodal understanding of the visual world, where audio serves as both a semantic and a spatial signal. In the end, I will also briefly talk about our latest work on multisensory learning for robotics.
-