News & Events

1,247 News items, Awards, Events and Talks related to MERL and its staff.

  •  AWARD   2017 Graph Challenge Student Innovation Award
    Date: August 4, 2017
    Awarded to: David Zhuzhunashvili and Andrew Knyazev
    Research Area: Machine Learning
    • David Zhuzhunashvili, an undergraduate student at the University of Colorado Boulder, and Andrew Knyazev, Distinguished Research Scientist at MERL, received the 2017 Graph Challenge Student Innovation Award. Their poster "Preconditioned Spectral Clustering for Stochastic Block Partition Streaming Graph Challenge" was accepted to the 2017 IEEE High Performance Extreme Computing Conference (HPEC '17), taking place 12-14 September 2017, and the paper was accepted to the IEEE Xplore HPEC proceedings.

      HPEC is the premier conference in the world on the convergence of High Performance and Embedded Computing. The DARPA/Amazon/IEEE Graph Challenge is a special HPEC event that encourages community approaches to developing new solutions for analyzing graphs derived from social media, sensor feeds, and scientific data, so that relationships between events can be discovered as they unfold in the field. The 2017 Streaming Graph Challenge is Stochastic Block Partition: it seeks to identify optimal blocks (or clusters) in a large graph with known ground-truth clusters, with performance evaluated against baseline Python and C codes provided by the Graph Challenge organizers.

      The proposed approach is spectral clustering, which partitions a graph using eigenvectors of a matrix representing the graph. The Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method iteratively approximates a few leading eigenvectors of the symmetric graph Laplacian for multi-way graph partitioning. Preliminary tests on all static cases of the Graph Challenge demonstrate 100% correct partitioning under every IEEE HPEC Graph Challenge metric, while running approximately 500-1000 times faster than the provided baseline code; e.g., the 2M-vertex static graph is 100% correctly partitioned in ~2,100 seconds. Warm-starting LOBPCG cuts the execution time by a further 2-3x for the streaming graphs.
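      The approach can be sketched in a few lines of Python. This is an illustrative toy (a 20-node graph made of two weakly connected cliques), not the authors' competition code; the SciPy routines used are the standard library implementations of LOBPCG and k-means.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg
from scipy.cluster.vq import kmeans2

def spectral_partition(adj, k):
    """Partition a graph into k blocks by clustering the rows of the
    smallest eigenvectors of its Laplacian, computed with LOBPCG."""
    n = adj.shape[0]
    deg = np.asarray(adj.sum(axis=1)).ravel()
    lap = sp.diags(deg) - adj                    # symmetric graph Laplacian
    X = np.random.default_rng(0).standard_normal((n, k))  # initial block;
    # warm-starting X from a previous solution is what speeds up streaming
    _, vecs = lobpcg(lap, X, largest=False, tol=1e-6, maxiter=200)
    _, labels = kmeans2(vecs, k, minit='++')     # cluster eigenvector rows
    return labels

# toy graph: two 10-node cliques joined by a single edge
clique = np.ones((10, 10)) - np.eye(10)
A = sp.block_diag([clique, clique]).tolil()
A[0, 10] = A[10, 0] = 1.0
labels = spectral_partition(A.tocsr(), 2)
```

With only one edge between the cliques, the Fiedler eigenvector cleanly separates the two blocks, so k-means on the eigenvector rows recovers the ground-truth partition.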
  •  NEWS   MERL researchers presented 11 papers at ACC 2017 (American Control Conference)
    Date: May 24, 2017 - May 26, 2017
    MERL Contacts: Mouhacine Benosman; Daniel Burns; Claus Danielson; Stefano Di Cairano; Abraham Goldsmith; Uroš Kalabić; Saleh Nabi; Daniel Nikovski; Arvind Raghunathan; Yebin Wang
    Research Areas: Control, Dynamical Systems, Machine Learning
    • Talks were presented by members of several groups at MERL and covered a wide range of topics:
      - Similarity-Based Vehicle-Motion Prediction
      - Transfer Operator Based Approach for Optimal Stabilization of Stochastic Systems
      - Extended command governors for constraint enforcement in dual stage processing machines
      - Cooperative Optimal Output Regulation of Multi-Agent Systems Using Adaptive Dynamic Programming
      - Deep Reinforcement Learning for Partial Differential Equation Control
      - Indirect Adaptive MPC for Output Tracking of Uncertain Linear Polytopic Systems
      - Constraint Satisfaction for Switched Linear Systems with Restricted Dwell-Time
      - Path Planning and Integrated Collision Avoidance for Autonomous Vehicles
      - Least Squares Dynamics in Newton-Krylov Model Predictive Control
      - A Neuro-Adaptive Architecture for Extremum Seeking Control Using Hybrid Learning Dynamics
      - Robust POD Model Stabilization for the 3D Boussinesq Equations Based on Lyapunov Theory and Extremum Seeking
  •  NEWS   MERL's breakthrough speech separation technology featured in Mitsubishi Electric Corporation's Annual R&D Open House
    Date: May 24, 2017
    Where: Tokyo, Japan
    MERL Contacts: Bret Harsham; Jonathan Le Roux
    Research Areas: Speech & Audio, Artificial Intelligence
    • Mitsubishi Electric Corporation announced that it has created the world's first technology that separates in real time the simultaneous speech of multiple unknown speakers recorded with a single microphone. It's a key step towards building machines that can interact in noisy environments, in the same way that humans can have meaningful conversations in the presence of many other conversations. In tests, the simultaneous speeches of two and three people were separated with up to 90 and 80 percent accuracy, respectively. The novel technology, which was realized with Mitsubishi Electric's proprietary "Deep Clustering" method based on artificial intelligence (AI), is expected to contribute to more intelligible voice communications and more accurate automatic speech recognition. A characteristic feature of this approach is its versatility, in the sense that voices can be separated regardless of their language or the gender of the speakers. A live speech separation demonstration that took place on May 24 in Tokyo, Japan, was widely covered by the Japanese media, with reports by three of the main Japanese TV stations and multiple articles in print and online newspapers. The technology is based on recent research by MERL's Speech and Audio team.
      Mitsubishi Electric Corporation Press Release
      MERL Deep Clustering Demo

      Media Coverage:

      Fuji TV, News, "Minna no Mirai" (Japanese)
      The Nikkei (Japanese)
      Nikkei Technology Online (Japanese)
      Sankei Biz (Japanese)
      EE Times Japan (Japanese)
      ITpro (Japanese)
      Nikkan Sports (Japanese)
      Nikkan Kogyo Shimbun (Japanese)
      Dempa Shimbun (Japanese)
      Il Sole 24 Ore (Italian)
      IEEE Spectrum (English)
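      In deep clustering, a trained neural network maps each time-frequency bin of the mixture spectrogram to an embedding vector so that bins dominated by the same speaker land close together; a clustering step then turns the embeddings into per-speaker masks. The test-time clustering half can be sketched as below. The embeddings here are synthetic stand-ins for network output; the real system's trained recurrent network is not part of this sketch.

```python
import numpy as np

def masks_from_embeddings(emb, n_speakers, n_iter=20):
    """Cluster per-bin embeddings into binary time-frequency masks, one per
    speaker, using plain k-means (the test-time half of deep clustering)."""
    T, F, D = emb.shape
    X = emb.reshape(-1, D)
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), n_speakers, replace=False)].copy()
    for _ in range(n_iter):
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = dist.argmin(1)
        for c in range(n_speakers):
            pts = X[assign == c]
            if len(pts):
                centers[c] = pts.mean(0)
    return np.eye(n_speakers)[assign].reshape(T, F, n_speakers)

# synthetic embeddings standing in for network output: two groups of bins
T, F, D = 10, 12, 4
rng = np.random.default_rng(1)
emb = rng.standard_normal((T, F, D)) * 0.05
emb[:, :6, 0] += 1.0     # bins dominated by "speaker 1"
emb[:, 6:, 1] += 1.0     # bins dominated by "speaker 2"
masks = masks_from_embeddings(emb, 2)
```

Each mask is then applied to the mixture spectrogram to recover one speaker's signal.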
  •  NEWS   MERL organizes Workshop on Advanced Digital Transmitters at 2017 International Microwave Symposium
    Date: June 5, 2017
    Where: Honolulu, HI
    MERL Contacts: Rui Ma; Philip Orlik; Koon Hoo Teo
    Research Areas: Communications, Electronic and Photonic Devices, Signal Processing
    • MERL researcher Dr. Rui Ma is organizing a workshop on advanced digital transmitters in collaboration with Dr. SungWon Chung of the University of Southern California (USC). This workshop overviews recent advances in digital-intensive wireless transmitter R&D for both base stations and mobile devices. The focus will be on digital signal processing techniques and related digital-intensive transmitter circuits and architectures for advanced modulation, linearization, spur cancellation, high-efficiency encoding, and parallel processing. The workshop takes place on Monday, June 5th, 2017 at International Microwave Week in Honolulu, HI. In total, 8 technical presentations from world-leading research groups will be given.

      Dr. Ma will present a talk titled, "Advanced Power Encoding and Non-Contiguous Multi-Band Digital Transmitter Architectures"
  •  EVENT   Tim Marks to give lunch talk at Face and Gesture 2017 conference
    Date: Thursday, June 1, 2017
    Speaker: Tim K. Marks
    MERL Contact: Tim Marks
    Location: IEEE Conference on Automatic Face and Gesture Recognition (FG 2017), Washington, DC
    Research Areas: Machine Learning, Artificial Intelligence, Computer Vision
    • MERL Senior Principal Research Scientist Tim K. Marks will give the invited lunch talk on Thursday, June 1, at the IEEE International Conference on Automatic Face and Gesture Recognition (FG 2017). The talk is entitled "Robust Real-Time 3D Head Pose and 2D Face Alignment."
  •  NEWS   MERL researchers will present 5 papers at ICC2017 wireless communications conference
    Date: May 21, 2017 - May 25, 2017
    Where: IEEE International Conference on Communications (ICC)
    MERL Contacts: Kyeong Jin (K.J.) Kim; Toshiaki Koike-Akino; Philip Orlik; Milutin Pajovic; Pu (Perry) Wang; Ye Wang
    Research Areas: Communications, Signal Processing
    • Five papers from the Wireless Comms team will be presented at ICC2017 to be held in Paris from 21-25 May 2017. The papers relate to channel estimation and adaptive transmission for mmWave, noncoherent MIMO, error correction coding, and video transmission.
  •  EVENT   Society for Industrial and Applied Mathematics panel for students on careers in industry
    Date & Time: Monday, July 10, 2017; 6:15 PM - 7:15 PM
    Speaker: Andrew Knyazev and other panelists, MERL
    MERL Contact: Joseph Katz
    Location: David Lawrence Convention Center, Pittsburgh PA
    • Andrew Knyazev accepted an invitation to represent MERL at the panel on Student Careers in Business, Industry and Government at the annual meeting of the Society for Industrial and Applied Mathematics (SIAM).

      The format consists of a five minute introduction by each of the panelists covering their background and an overview of the mathematical and computational challenges at their organization. The introductions will be followed by questions from the students.
  •  NEWS   MERL Researcher Tim Marks presents an invited talk at MIT Lincoln Laboratory
    Date: April 27, 2017
    Where: Lincoln Laboratory, Massachusetts Institute of Technology
    MERL Contact: Tim Marks
    Research Areas: Machine Learning, Artificial Intelligence, Computer Vision
    • MERL researcher Tim K. Marks presented an invited talk as part of the MIT Lincoln Laboratory CORE Seminar Series on Biometrics. The talk was entitled "Robust Real-Time 2D Face Alignment and 3D Head Pose Estimation."

      Abstract: Head pose estimation and facial landmark localization are key technologies, with widespread application areas including biometrics and human-computer interfaces. This talk describes two different robust real-time face-processing methods, each using a different modality of input image. The first part of the talk describes our system for 3D head pose estimation and facial landmark localization using a commodity depth sensor. The method is based on a novel 3D Triangular Surface Patch (TSP) descriptor, which is viewpoint-invariant as well as robust to noise and to variations in the data resolution. This descriptor, combined with fast nearest-neighbor lookup and a joint voting scheme, enable our system to handle arbitrary head pose and significant occlusions. The second part of the talk describes our method for face alignment, which is the localization of a set of facial landmark points in a 2D image or video of a face. Face alignment is particularly challenging when there are large variations in pose (in-plane and out-of-plane rotations) and facial expression. To address this issue, we propose a cascade in which each stage consists of a Mixture of Invariant eXperts (MIX), where each expert learns a regression model that is specialized to a different subset of the joint space of pose and expressions. We also present a method to include deformation constraints within the discriminative alignment framework, which makes the algorithm more robust. Both our 3D head pose and 2D face alignment methods outperform the previous results on standard datasets. If permitted, I plan to end the talk with a live demonstration.
  •  NEWS   MERL researcher Tim Marks presents invited talk at University of Utah
    Date: April 10, 2017
    Where: University of Utah School of Computing
    MERL Contact: Tim Marks
    Research Areas: Machine Learning, Artificial Intelligence, Computer Vision
    • MERL researcher Tim K. Marks presented an invited talk at the University of Utah School of Computing, entitled "Action Detection from Video and Robust Real-Time 2D Face Alignment."

      Abstract: The first part of the talk describes our multi-stream bi-directional recurrent neural network for action detection from video. In addition to a two-stream convolutional neural network (CNN) on full-frame appearance (images) and motion (optical flow), our system trains two additional streams on appearance and motion that have been cropped to a bounding box from a person tracker. To model long-term temporal dynamics within and between actions, the multi-stream CNN is followed by a bi-directional Long Short-Term Memory (LSTM) layer. Our method outperforms the previous state of the art on two action detection datasets: the MPII Cooking 2 Dataset, and a new MERL Shopping Dataset that we have made available to the community. The second part of the talk describes our method for face alignment, which is the localization of a set of facial landmark points in a 2D image or video of a face. Face alignment is particularly challenging when there are large variations in pose (in-plane and out-of-plane rotations) and facial expression. To address this issue, we propose a cascade in which each stage consists of a Mixture of Invariant eXperts (MIX), where each expert learns a regression model that is specialized to a different subset of the joint space of pose and expressions. We also present a method to include deformation constraints within the discriminative alignment framework, which makes the algorithm more robust. Our face alignment system outperforms the previous results on standard datasets. The talk will end with a live demo of our face alignment system.
  •  EVENT   MERL to participate in Xconomy Forum on AI & Robotics
    Date & Time: Tuesday, March 28, 2017; 1:30 - 5:30PM
    MERL Contacts: Joseph Katz; Daniel Nikovski; Alan Sullivan; Jay Thornton; Anthony Vetro; Richard (Dick) Waters; Jinyun Zhang
    Location: Google (355 Main St., 5th Floor, Cambridge MA)
    • How will AI and robotics reshape the economy and create new opportunities (and challenges) across industries? Who are the hottest companies that will compete with the likes of Google, Amazon, and Uber to create the future? And what are New England innovators doing to strengthen the local cluster and help lead the national discussion?

      MERL will be participating in Xconomy's third annual conference on AI and robotics in Boston to address these questions. MERL President & CEO Dick Waters will be on a panel discussing the status and future of self-driving vehicles. Lab members will also be on hand to demonstrate and discuss recent advances in AI and robotics technology.

      The agenda and registration for the event can be found online:
  •  NEWS   MERL researchers will present 5 papers at OFC2017 optical communications conference
    Date: March 19, 2017 - March 23, 2017
    Where: Optical Fiber Communication Conference and Exhibition (OFC)
    MERL Contacts: Toshiaki Koike-Akino; Keisuke Kojima; David Millar; Milutin Pajovic; Kieran Parsons
    Research Areas: Communications, Electronic and Photonic Devices, Signal Processing
    • Five papers from the Optical Comms team will be presented at OFC2017 to be held in Los Angeles from 19-23 March 2017. The papers relate to 1Tb/s optical transmission, high performance modulation formats and error correction coding for coherent optical links and precoding for plastic optical fiber links.
  •  EVENT   MERL hosts Boston Imaging and Vision Meetup
    Date & Time: Tuesday, January 17, 2017; 6:00 pm
    Speaker: Tim Marks, Esra Cansizoglu and Carl Vondrick, MERL and MIT
    MERL Contact: Alan Sullivan
    Location: 201 Broadway, Cambridge, MA
    Research Area: Computer Vision
    • MERL was pleased to host the Boston Imaging and Vision Meetup held on January 17. The meetup is an informal gathering of people interested in the field of computer imaging and vision. According to the group's website "the meetup provides an opportunity for the image processing/computer vision community to network, socialize and learn". The event held at MERL featured three speakers, Tim Marks and Esra Cansizoglu from MERL, as well as Carl Vondrick, an MIT CS graduate student in the group of Prof. Antonio Torralba. Roughly 70 people attended to eat pizza, hear the speakers and network.
  •  TALK   Generative Model-Based Text-to-Speech Synthesis
    Date & Time: Wednesday, February 1, 2017; 12:00-13:00
    Speaker: Dr. Heiga ZEN, Google
    MERL Host: Chiori Hori
    Research Area: Speech & Audio
    • Recent progress in generative modeling has improved the naturalness of synthesized speech significantly. In this talk I will summarize these generative model-based approaches for speech synthesis such as WaveNet, a deep generative model of raw audio waveforms. We show that WaveNets are able to generate speech which mimics any human voice and which sounds more natural than the best existing Text-to-Speech systems.
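      The generation loop at the heart of WaveNet is autoregressive: each output sample is drawn from a categorical distribution over quantized amplitudes, conditioned only on past samples through a causal receptive field. The sketch below illustrates just that sampling loop; a single random linear layer stands in for the trained stack of dilated causal convolutions, so it produces noise rather than speech.

```python
import numpy as np

def sample_autoregressive(steps, receptive_field=8, n_levels=16, seed=0):
    """Toy autoregressive sampler: each new sample is drawn from a softmax
    over n_levels quantized amplitudes, conditioned only on the previous
    receptive_field samples (never on the future)."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((receptive_field, n_levels))
    x = [n_levels // 2] * receptive_field              # "silence" padding
    for _ in range(steps):
        ctx = np.array(x[-receptive_field:]) / n_levels  # causal context only
        logits = ctx @ W
        p = np.exp(logits - logits.max())
        p /= p.sum()
        x.append(int(rng.choice(n_levels, p=p)))       # sample next amplitude
    return np.array(x[receptive_field:])

samples = sample_autoregressive(100)   # 100 quantized "audio" samples
```

In the real model the conditioning is a deep dilated convolution stack trained on raw waveforms, which is what makes the sampled audio natural.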
  •  NEWS   MERL to present 10 papers at ICASSP 2017
    Date: March 5, 2017 - March 9, 2017
    Where: New Orleans
    MERL Contacts: Petros Boufounos; Takaaki Hori; Jonathan Le Roux; Dehong Liu; Hassan Mansour; Anthony Vetro; Ye Wang
    Research Areas: Computer Vision, Computational Sensing, Digital Video, Information Security, Speech & Audio, Artificial Intelligence
    • MERL researchers will present 10 papers at the upcoming IEEE International Conference on Acoustics, Speech & Signal Processing (ICASSP), to be held in New Orleans from March 5-9, 2017. Topics to be presented include recent advances in speech recognition and audio processing; graph signal processing; computational imaging; and privacy-preserving data analysis.

      ICASSP is the flagship conference of the IEEE Signal Processing Society, and the world's largest and most comprehensive technical conference focused on the research advances and latest technological development in signal and information processing. The event attracts more than 2000 participants each year.
  •  NEWS   MERL's Power Amplifier Technologies featured in Mitsubishi Electric Corporation press release
    Date: January 12, 2017
    Where: Tokyo, Japan
    MERL Contact: Rui Ma
    Research Areas: Communications, Electronic and Photonic Devices, Electric Systems
    • Mitsubishi Electric Corporation and Mitsubishi Electric Research Laboratories (MERL) announced today the development of an ultra-wideband gallium nitride (GaN) Doherty power amplifier for next generation base stations that is compatible with a world-leading range (company estimate) of frequency bands above 3GHz to cover an operating bandwidth of 600MHz. The technology is expected to help reduce the size and energy consumption of next generation wireless base stations.

      Please see the link below for the full Mitsubishi Electric press release text.
  •  AWARD   APSIPA recognizes Anthony Vetro as a 2016 Industrial Distinguished Leader
    Date: October 15, 2016
    Awarded to: Anthony Vetro
    MERL Contact: Anthony Vetro
    • Anthony Vetro was recognized by APSIPA (Asia-Pacific Signal and Information Processing Association) as a 2016 Industrial Distinguished Leader. This distinction is reserved for selected APSIPA members with extraordinary accomplishments in any of the fields related to APSIPA scope. A list of past recipients can be found online:
  •  TALK   High-Dimensional Analysis of Stochastic Optimization Algorithms for Estimation and Learning
    Date & Time: Tuesday, December 13, 2016; Noon
    Speaker: Yue M. Lu, John A. Paulson School of Engineering and Applied Sciences, Harvard University
    MERL Host: Petros Boufounos
    Research Areas: Computational Sensing, Machine Learning
    • In this talk, we will present a framework for analyzing, in the high-dimensional limit, the exact dynamics of several stochastic optimization algorithms that arise in signal and information processing. For concreteness, we consider two prototypical problems: sparse principal component analysis and regularized linear regression (e.g. LASSO). For each case, we show that the time-varying estimates given by the algorithms will converge weakly to a deterministic "limiting process" in the high-dimensional limit. Moreover, this limiting process can be characterized as the unique solution of a nonlinear PDE, and it provides exact information regarding the asymptotic performance of the algorithms. For example, performance metrics such as the MSE, the cosine similarity and the misclassification rate in sparse support recovery can all be obtained by examining the deterministic limiting process. A steady-state analysis of the nonlinear PDE also reveals interesting phase transition phenomena related to the performance of the algorithms. Although our analysis is asymptotic in nature, numerical simulations show that the theoretical predictions are accurate for moderate signal dimensions.
  •  TALK   Reduced basis methods and their application in data science and uncertainty quantification
    Date & Time: Monday, December 12, 2016; 12:00 PM
    Speaker: Yanlai Chen, Department of Mathematics at the University of Massachusetts Dartmouth
    Research Areas: Control, Dynamical Systems
    • Models of reduced computational complexity are indispensable in scenarios where a large number of numerical solutions to a parametrized problem are desired in a fast/real-time fashion. These include simulation-based design, parameter optimization, optimal control, multi-model/scale analysis, and uncertainty quantification. Thanks to an offline-online procedure and the recognition that the parameter-induced solution manifolds can be well approximated by finite-dimensional spaces, the reduced basis method (RBM) and reduced collocation method (RCM) can improve efficiency by several orders of magnitude. The accuracy of the RBM solution is maintained through a rigorous a posteriori error estimator, whose efficient development is critical and involves fast eigensolves.

      In this talk, I will give a brief introduction of the RBM/RCM, and explain how they can be used for data compression, face recognition, and significantly delaying the curse of dimensionality for uncertainty quantification.
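      The offline-online split can be illustrated in a few lines: offline, solve the full problem at a handful of parameter values and compress the snapshots into a small orthonormal basis (the POD flavor of model reduction); online, solve only the tiny projected system for each new parameter. This is a generic sketch, not the speaker's implementation, and it omits the a posteriori error estimator.

```python
import numpy as np

def build_reduced_basis(snapshots, tol=1e-8):
    """Offline: compress full-order solution snapshots into an orthonormal
    basis via the SVD, truncated to capture (1 - tol) of the energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]

def reduced_solve(A, b, V):
    """Online: Galerkin-project the n x n system to r x r and solve that."""
    return V @ np.linalg.solve(V.T @ A @ V, V.T @ b)

# parametrized SPD system A(mu) = A0 + mu * A1
n = 40
rng = np.random.default_rng(1)
M = rng.standard_normal((n, n))
A0 = M @ M.T + n * np.eye(n)
A1 = np.diag(np.linspace(1.0, 2.0, n))
b = np.ones(n)

# offline: full solves at a few training parameters
S = np.column_stack([np.linalg.solve(A0 + mu * A1, b)
                     for mu in (0.0, 0.25, 0.5, 0.75, 1.0)])
V = build_reduced_basis(S)

# online: cheap solve at a new parameter value
mu_new = 0.6
x_red = reduced_solve(A0 + mu_new * A1, b, V)
x_full = np.linalg.solve(A0 + mu_new * A1, b)
rel_err = np.linalg.norm(x_red - x_full) / np.linalg.norm(x_full)
```

Because the solution depends smoothly on the parameter, the span of a few snapshots already approximates the whole solution manifold, which is what makes the online stage so cheap.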
  •  TALK   Collaborative dictionary learning from big, distributed data
    Date & Time: Friday, December 2, 2016; 11:00 AM
    Speaker: Prof. Waheed Bajwa, Rutgers University
    MERL Host: Petros Boufounos
    Research Area: Computational Sensing
    • While distributed information processing has a rich history, relatively little attention has been paid to the problem of collaborative learning of nonlinear geometric structures underlying data distributed across sites that are connected to each other in an arbitrary topology. In this talk, we discuss this problem in the context of collaborative dictionary learning from big, distributed data. It is assumed that a number of geographically-distributed, interconnected sites have massive local data and they are interested in collaboratively learning a low-dimensional geometric structure underlying these data. In contrast to some of the previous works on subspace-based data representations, we focus on the geometric structure of a union of subspaces (UoS). In this regard, we propose a distributed algorithm, termed cloud K-SVD, for collaborative learning of a UoS structure underlying distributed data of interest. The goal of cloud K-SVD is to learn an overcomplete dictionary at each individual site such that every sample in the distributed data can be represented through a small number of atoms of the learned dictionary. Cloud K-SVD accomplishes this goal without requiring communication of individual data samples between different sites. In this talk, we also theoretically characterize deviations of the dictionaries learned at individual sites by cloud K-SVD from a centralized solution. Finally, we numerically illustrate the efficacy of cloud K-SVD in the context of supervised training of nonlinear classifiers from distributed, labeled training data.
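      At each site, cloud K-SVD alternates the same two steps as classical K-SVD: sparse-code every sample with the current dictionary, then refresh one atom at a time from a rank-1 factorization of the residual. The "cloud" part replaces that local SVD with consensus-based distributed power iterations so that no raw samples cross sites. The single-site half is sketched below; this is a generic illustration rather than the paper's code, and the consensus step is omitted.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: represent y with at most k atoms of D."""
    idx, r = [], y.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def ksvd_atom_update(D, X, Y, j):
    """Classical K-SVD update of atom j via a rank-1 SVD of the residual;
    cloud K-SVD computes the same rank-1 factor with distributed power
    iterations over the network instead of a centralized SVD."""
    users = np.nonzero(X[j])[0]          # samples that currently use atom j
    if users.size == 0:
        return D, X
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, j] = U[:, 0]                    # new unit-norm atom
    X[j, users] = s[0] * Vt[0]           # refreshed coefficients
    return D, X

# toy run: one atom update never increases the reconstruction error
rng = np.random.default_rng(0)
Y = rng.standard_normal((8, 20))
D = rng.standard_normal((8, 5))
D /= np.linalg.norm(D, axis=0)
X = np.column_stack([omp(D, Y[:, i], 2) for i in range(20)])
err_before = np.linalg.norm(Y - D @ X)
D, X = ksvd_atom_update(D, X, Y, 0)
err_after = np.linalg.norm(Y - D @ X)
```

Because the rank-1 SVD is the optimal update for that atom with all others fixed, the reconstruction error is guaranteed not to increase, which is what the toy run above checks.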
  •  EVENT   MERL organizes Workshop on End-to-End Speech and Audio Processing at NIPS 2016
    Date: Saturday, December 10, 2016
    Location: Centre Convencions Internacional Barcelona, Barcelona SPAIN
    Research Areas: Machine Learning, Speech & Audio
    • MERL researcher John Hershey is organizing a Workshop on End-to-End Speech and Audio Processing on behalf of MERL's Speech and Audio team, in collaboration with Philemon Brakel of the University of Montreal. The workshop focuses on recent advances in end-to-end deep learning methods that address the alignment and structured prediction problems naturally arising in speech and audio processing. The all-day workshop takes place on Saturday, December 10th at NIPS 2016, in Barcelona, Spain.