News & Events

  •  EVENT   Society for Industrial and Applied Mathematics panel for students on careers in industry
    Date & Time: Monday, July 10, 2017; 6:15 PM - 7:15 PM
    Speaker: Andrew Knyazev and other panelists, MERL
    MERL Contacts: Joseph Katz; Andrew Knyazev
    Location: David Lawrence Convention Center, Pittsburgh PA
    Research Areas: Electronics & Communications, Multimedia, Data Analytics, Computer Vision, Mechatronics, Algorithms, Advanced Control Systems, Computational Geometry, Computational Photography, Computational Sensing, Decision Optimization, Digital Video, Dynamical Systems, Information Security, Machine Learning, Optical Communications & Devices, Power & RF, Predictive Modeling, Speech & Audio, Wireless Communications & Signal Processing
    Brief
    • Andrew Knyazev accepted an invitation to represent MERL at the panel on Student Careers in Business, Industry and Government at the annual meeting of the Society for Industrial and Applied Mathematics (SIAM).

      The format consists of a five-minute introduction by each of the panelists covering their background and an overview of the mathematical and computational challenges at their organization. The introductions will be followed by questions from the students.
  •  EVENT   MERL to participate in Xconomy Forum on AI & Robotics
    Date & Time: Tuesday, March 28, 2017; 1:30 - 5:30PM
    MERL Contacts: John R. Hershey; Joseph Katz; Daniel N. Nikovski; Alan Sullivan; Jay E. Thornton; Anthony Vetro; Richard C. (Dick) Waters; Jinyun Zhang
    Location: Google (355 Main St., 5th Floor, Cambridge MA)
    Research Areas: Multimedia, Data Analytics, Computer Vision, Mechatronics
    Brief
    • How will AI and robotics reshape the economy and create new opportunities (and challenges) across industries? Who are the hottest companies that will compete with the likes of Google, Amazon, and Uber to create the future? And what are New England innovators doing to strengthen the local cluster and help lead the national discussion?

      MERL will be participating in Xconomy's third annual conference on AI and robotics in Boston to address these questions. MERL President & CEO Dick Waters will be on a panel discussing the status and future of self-driving vehicles. Lab members will also be on hand to demonstrate and discuss recent advances in AI and robotics technology.

      The agenda and registration for the event can be found online: https://xconomyforum85.eventbrite.com
  •  TALK   Generative Model-Based Text-to-Speech Synthesis
    Date & Time: Wednesday, February 1, 2017; 12:00-13:00
    Speaker: Dr. Heiga ZEN, Google
    MERL Host: Chiori Hori
    Research Areas: Multimedia, Speech & Audio
    Brief
    • Recent progress in generative modeling has significantly improved the naturalness of synthesized speech. In this talk, I will summarize generative model-based approaches to speech synthesis, such as WaveNet, a deep generative model of raw audio waveforms. We show that WaveNets are able to generate speech that mimics any human voice and sounds more natural than the best existing text-to-speech systems.
      See https://deepmind.com/blog/wavenet-generative-model-raw-audio/ for further details.
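      For readers unfamiliar with WaveNet's architecture, its central structural idea is a stack of dilated causal convolutions whose receptive field grows exponentially with depth. The sketch below is an illustration of that idea only (it assumes kernel size 2 and a single cycle of doubling dilations; it is not DeepMind's implementation):

```python
import numpy as np

# Receptive field of a stack of kernel-size-2 dilated causal convolutions,
# the core structural idea behind WaveNet.
dilations = [1, 2, 4, 8, 16, 32, 64, 128]
receptive_field = sum(dilations) + 1   # each layer adds (kernel-1)*dilation samples

def causal_dilated_conv(x, w, dilation):
    """1-D causal conv, kernel size 2: output[t] sees only x[t] and x[t - dilation]."""
    padded = np.concatenate([np.zeros(dilation), x])
    return w[0] * padded[:-dilation] + w[1] * padded[dilation:]

x = np.random.default_rng(0).standard_normal(1000)
y = causal_dilated_conv(x, np.array([0.5, 0.5]), dilation=4)
```

      With these eight layers each output sample depends on 256 past samples, which is why stacking a few such cycles lets WaveNet condition on long audio contexts cheaply.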
  •  TALK   High-Dimensional Analysis of Stochastic Optimization Algorithms for Estimation and Learning
    Date & Time: Tuesday, December 13, 2016; Noon
    Speaker: Yue M. Lu, John A. Paulson School of Engineering and Applied Sciences, Harvard University
    MERL Host: Petros T. Boufounos
    Research Areas: Multimedia, Computational Sensing, Machine Learning
    Brief
    • In this talk, we will present a framework for analyzing, in the high-dimensional limit, the exact dynamics of several stochastic optimization algorithms that arise in signal and information processing. For concreteness, we consider two prototypical problems: sparse principal component analysis and regularized linear regression (e.g. LASSO). For each case, we show that the time-varying estimates given by the algorithms will converge weakly to a deterministic "limiting process" in the high-dimensional limit. Moreover, this limiting process can be characterized as the unique solution of a nonlinear PDE, and it provides exact information regarding the asymptotic performance of the algorithms. For example, performance metrics such as the MSE, the cosine similarity and the misclassification rate in sparse support recovery can all be obtained by examining the deterministic limiting process. A steady-state analysis of the nonlinear PDE also reveals interesting phase transition phenomena related to the performance of the algorithms. Although our analysis is asymptotic in nature, numerical simulations show that the theoretical predictions are accurate for moderate signal dimensions.
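      The performance metrics named in the abstract can be tracked along an algorithm's trajectory on a toy instance. The sketch below runs plain (deterministic) ISTA on a synthetic sparse regression problem and records the MSE trace and final cosine similarity; it only illustrates the setting, not the talk's stochastic algorithms or their PDE-based analysis, and the problem sizes and parameters are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 400, 10                      # measurements, dimension, sparsity
x_true = np.zeros(d)
x_true[:k] = 1.0
A = rng.standard_normal((n, d)) / np.sqrt(n)
y = A @ x_true + 0.01 * rng.standard_normal(n)

def soft_threshold(v, t):                   # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lam, step = 0.01, 0.1                       # step small enough for stability
x = np.zeros(d)
mse_trace = []
for _ in range(500):                        # ISTA: gradient step, then shrinkage
    x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    mse_trace.append(np.mean((x - x_true) ** 2))

cosine = x @ x_true / (np.linalg.norm(x) * np.linalg.norm(x_true))
```

      Plotting `mse_trace` against the iteration count gives exactly the kind of deterministic-looking performance curve whose high-dimensional limit the talk characterizes.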
  •  TALK   Collaborative dictionary learning from big, distributed data
    Date & Time: Friday, December 2, 2016; 11:00 AM
    Speaker: Prof. Waheed Bajwa, Rutgers University
    MERL Host: Petros T. Boufounos
    Research Areas: Multimedia, Computational Sensing
    Brief
    • While distributed information processing has a rich history, relatively less attention has been paid to the problem of collaborative learning of nonlinear geometric structures underlying data distributed across sites that are connected to each other in an arbitrary topology. In this talk, we discuss this problem in the context of collaborative dictionary learning from big, distributed data. It is assumed that a number of geographically-distributed, interconnected sites have massive local data and they are interested in collaboratively learning a low-dimensional geometric structure underlying these data. In contrast to some of the previous works on subspace-based data representations, we focus on the geometric structure of a union of subspaces (UoS). In this regard, we propose a distributed algorithm, termed cloud K-SVD, for collaborative learning of a UoS structure underlying distributed data of interest. The goal of cloud K-SVD is to learn an overcomplete dictionary at each individual site such that every sample in the distributed data can be represented through a small number of atoms of the learned dictionary. Cloud K-SVD accomplishes this goal without requiring communication of individual data samples between different sites. In this talk, we also theoretically characterize deviations of the dictionaries learned at individual sites by cloud K-SVD from a centralized solution. Finally, we numerically illustrate the efficacy of cloud K-SVD in the context of supervised training of nonlinear classifiers from distributed, labeled training data.
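      The sparse-coding step at the heart of K-SVD-style methods, representing each sample with a few dictionary atoms, can be sketched with orthogonal matching pursuit. This is a centralized simplification for illustration only, not the distributed cloud K-SVD algorithm; the dictionary, sparsity level, and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
d, K, s = 16, 32, 3                         # signal dim, dictionary atoms, sparsity
D = rng.standard_normal((d, K))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms

def omp(D, y, s):
    """Orthogonal matching pursuit: greedily select s atoms, refit by least squares."""
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(s):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

code = np.zeros(K)
code[[3, 11, 27]] = [1.0, -2.0, 0.5]        # a 3-sparse ground-truth code
y = D @ code
x_hat = omp(D, y, s)                        # y represented through at most s atoms
```

      Cloud K-SVD keeps this per-sample coding local to each site and replaces the centralized dictionary-atom update with a collaborative one, so no raw samples cross site boundaries.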
  •  EVENT   MERL organizes Workshop on End-to-End Speech and Audio Processing at NIPS 2016
    Date: Saturday, December 10, 2016
    MERL Contact: John R. Hershey
    Location: Centre Convencions Internacional Barcelona, Barcelona SPAIN
    Research Areas: Multimedia, Machine Learning, Speech & Audio
    Brief
    • MERL researcher John Hershey is organizing a Workshop on End-to-End Speech and Audio Processing, on behalf of MERL's Speech and Audio team and in collaboration with Philemon Brakel of the University of Montreal. The workshop focuses on recent advances in end-to-end deep learning methods that address the alignment and structured prediction problems which naturally arise in speech and audio processing. The all-day workshop takes place on Saturday, December 10th at NIPS 2016 in Barcelona, Spain.
  •  EVENT   John Hershey to present tutorial at the 2016 IEEE SLT Workshop
    Date: Tuesday, December 13, 2016
    Speaker: John Hershey, MERL
    MERL Contacts: John R. Hershey; Jonathan Le Roux; Shinji Watanabe
    Location: 2016 IEEE Spoken Language Technology Workshop, San Diego, California
    Research Areas: Multimedia, Machine Learning, Speech & Audio
    Brief
    • MERL researcher John Hershey presents an invited tutorial at the 2016 IEEE Workshop on Spoken Language Technology, in San Diego, California. The topic, "developing novel deep neural network architectures from probabilistic models" stems from MERL work with collaborators Jonathan Le Roux and Shinji Watanabe, on a principled framework that seeks to improve our understanding of deep neural networks, and draws inspiration for new types of deep network from the arsenal of principles and tools developed over the years for conventional probabilistic models. The tutorial covers a range of parallel ideas in the literature that have formed a recent trend, as well as their application to speech and language.
  •  EVENT   2016 IEEE Workshop on Spoken Language Technology: Sponsored by MERL
    Date: Tuesday, December 13, 2016 - Friday, December 16, 2016
    MERL Contact: John R. Hershey
    Location: San Diego, California
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The IEEE Workshop on Spoken Language Technology is a premier international showcase for advances in spoken language technology. The theme for 2016 is "machine learning: from signal to concepts," which reflects the current excitement about end-to-end learning in speech and language processing. This year, MERL is showing its support for SLT as one of its top sponsors, along with Amazon and Microsoft.
  •  EVENT   MERL Open House
    Date & Time: Thursday, December 8, 2016; 4:00-7:00pm
    MERL Contacts: Elizabeth Phillips; Anthony Vetro
    Location: 201 Broadway, 8th Floor, Cambridge, MA
    Research Areas: Electronics & Communications, Multimedia, Data Analytics, Computer Vision, Mechatronics, Algorithms, Business Innovation
    Brief
    • Snacks, demos, science: On Thursday 12/8, Mitsubishi Electric Research Labs (MERL) will host an open house for graduate+ students interested in internships, post-docs, and research scientist positions. The event will be held from 4-7pm and will feature demos & short presentations in our main areas of research: algorithms, multimedia, electronics, communications, computer vision, speech processing, optimization, machine learning, data analytics, mechatronics, dynamics, control, and robotics. MERL is a high-impact, publication-oriented research lab with very extensive internship and university collaboration programs. Most internships lead to publication; many of our interns and staff have gone on to notable careers at MERL and in academia. Come mix with our researchers, see our state-of-the-art technologies, and learn about our research opportunities. Dress code: casual, with resumes.

      Pre-registration for the event is strongly encouraged:
      https://www.eventbrite.com/e/merl-open-house-tickets-29408503626

      Current internship and employment openings:
      http://www.merl.com/internship/openings
      http://www.merl.com/employment/employment
  •  EVENT   MERL participating in Engineering Career Fair
    Date & Time: Wednesday, November 16, 2016; 3:30-6:30pm
    MERL Contacts: Elizabeth Phillips; Anthony Vetro
    Location: Sheraton Commander (16 Garden Street, Cambridge, MA)
    Research Areas: Electronics & Communications, Multimedia, Data Analytics, Computer Vision, Mechatronics, Algorithms, Business Innovation
    Brief
    • MERL will be participating in the Engineering Career Fair Collaborative, which is being held on November 16, 2016 at the Sheraton Commander in Cambridge from 3:30-6:30pm. Graduate students with an interest in learning about internship and other employment opportunities at MERL are invited to visit our booth. Staff members will be on hand to discuss current openings. We will also be showing some demonstrations of current research projects.

      Current internship and employment openings:
      http://www.merl.com/internship/openings
      http://www.merl.com/employment/employment
  •  EVENT   SANE 2016 - Speech and Audio in the Northeast
    Date: Friday, October 21, 2016
    MERL Contacts: John R. Hershey; Jonathan Le Roux; Shinji Watanabe
    Location: MIT, McGovern Institute for Brain Research, Cambridge, MA
    Research Areas: Multimedia, Speech & Audio
    Brief
    • SANE 2016, a one-day event gathering researchers and students in speech and audio from the Northeast of the American continent, will be held on Friday October 21, 2016 at MIT's Brain and Cognitive Sciences Department, at the McGovern Institute for Brain Research, in Cambridge, MA.

      It is a follow-up to SANE 2012 (Mitsubishi Electric Research Labs - MERL), SANE 2013 (Columbia University), SANE 2014 (MIT CSAIL), and SANE 2015 (Google NY). Since the first edition, the audience has steadily grown, gathering 140 researchers and students in 2015.

      SANE 2016 will feature invited talks by leading researchers: Juan P. Bello (NYU), William T. Freeman (MIT/Google), Nima Mesgarani (Columbia University), DAn Ellis (Google), Shinji Watanabe (MERL), Josh McDermott (MIT), and Jesse Engel (Google). It will also feature a lively poster session during lunch time, open to both students and researchers.

      SANE 2016 is organized by Jonathan Le Roux (MERL), Josh McDermott (MIT), Jim Glass (MIT), and John R. Hershey (MERL).
  •  EVENT   MERL Hosts 2nd Annual Women In Science Celebration
    Date & Time: Friday, July 22, 2016; 12:00 Noon
    MERL Contacts: Elizabeth Phillips; Jinyun Zhang
    Location: Cambridge Brewery
    Research Areas: Electronics & Communications, Multimedia, Data Analytics, Computer Vision, Mechatronics, Algorithms
    Brief
    • MERL hosted its 2nd Annual "Women In Science Celebration". MERL's current team of female interns discussed and celebrated the contributions they've made during their internships at MERL.
  •  EVENT   MERL celebrates 25 years of innovation
    Date: Thursday, June 2, 2016
    MERL Contacts: Elizabeth Phillips; Anthony Vetro
    Location: Norton's Woods Conference Center at American Academy of Arts & Sciences, Cambridge, MA
    Research Areas: Electronics & Communications, Multimedia, Data Analytics, Computer Vision, Mechatronics, Algorithms, Business Innovation
    Brief
    • MERL celebrated 25 years of innovation on Thursday, June 2 at the Norton's Woods Conference Center at the American Academy of Arts & Sciences in Cambridge, MA. The event was a great success, with inspiring keynote talks, insightful panel sessions, and an exciting research showcase of MERL's latest breakthroughs.

      Please visit the event page to view photos and video presentations of each session, as well as a commemorative booklet that highlights past and current research.
  •  TALK   Speech structure and its application to speech processing -- Relational, holistic and abstract representation of speech
    Date & Time: Friday, June 3, 2016; 1:30PM - 3:00PM
    Speaker: Nobuaki Minematsu and Daisuke Saito, The University of Tokyo
    MERL Host: Shinji Watanabe
    Research Areas: Multimedia, Speech & Audio
    Brief
    • Speech signals convey various kinds of information, which can be grouped into two kinds: linguistic and extra-linguistic information. Many speech applications, however, focus on only a single aspect of speech. For example, speech recognizers try to extract only word identity from speech, and speaker recognizers extract only speaker identity. Here, irrelevant features are often treated as hidden or latent by applying probability theory to a large number of samples, or the irrelevant features are normalized to quasi-standard values. In speech analysis, however, phases are usually removed, not hidden or normalized, and pitch harmonics are likewise removed. The resulting speech spectrum still contains both linguistic and extra-linguistic information. Is there any good method to remove extra-linguistic information from the spectrum? In this talk, we introduce our answer to that question, called speech structure. Extra-linguistic variation can be modeled as a feature space transformation, and our speech structure is based on the transform-invariance of f-divergence. This proposal was inspired by findings in classical studies of structural phonology and recent studies of developmental psychology. Speech structure has been applied to accent clustering, speech recognition, and language identification. These applications are also explained in the talk.
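      The transform-invariance underlying speech structure can be checked numerically for one member of the f-divergence family: the Bhattacharyya distance between two Gaussians is unchanged under any invertible affine map of the feature space. A small 1-D illustration using the closed-form Gaussian expression (the distributions and the distortion here are arbitrary stand-ins for speech events and extra-linguistic variation):

```python
import math

def bhattacharyya_gauss(m1, s1, m2, s2):
    """Bhattacharyya distance between N(m1, s1^2) and N(m2, s2^2)."""
    return (0.25 * (m1 - m2) ** 2 / (s1 ** 2 + s2 ** 2)
            + 0.5 * math.log((s1 ** 2 + s2 ** 2) / (2.0 * s1 * s2)))

# Two distributions and an affine distortion y = a*x + b, standing in for
# extra-linguistic variation such as a different vocal tract.
m1, s1, m2, s2 = 0.0, 1.0, 2.0, 0.5
a, b = 3.0, -1.0

before = bhattacharyya_gauss(m1, s1, m2, s2)
after = bhattacharyya_gauss(a * m1 + b, abs(a) * s1, a * m2 + b, abs(a) * s2)
```

      Because distances between events survive the transformation while absolute feature values do not, a structure built from such pairwise divergences captures the linguistic content and discards the speaker-dependent distortion.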
  •  EVENT   John Hershey Invited to Speak at Deep Learning Summit 2016 in Boston
    Date: Thursday, May 12, 2016 - Friday, May 13, 2016
    MERL Contact: John R. Hershey
    Location: Deep Learning Summit, Boston, MA
    Research Areas: Multimedia, Speech & Audio
    Brief
    • MERL Speech and Audio Senior Team Leader John Hershey is among a set of high-profile researchers invited to speak at the Deep Learning Summit 2016 in Boston on May 12-13, 2016. John will present the team's groundbreaking work on general sound separation using a novel deep learning framework called Deep Clustering. For the first time, an artificial intelligence is able to crack the half-century-old "cocktail party problem", that is, to isolate the speech of a single person from a mixture of multiple unknown speakers, as humans do when having a conversation in a loud crowd.
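      In deep clustering, a network embeds each time-frequency bin so that bins dominated by the same speaker cluster together; separation then reduces to clustering the embeddings. The sketch below illustrates only that final clustering step, on synthetic stand-in embeddings with a plain k-means loop; the learned embedding network, which is the substance of the method, is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-ins for learned T-F-bin embeddings: two "speakers",
# each bin's embedding scattered around its speaker's centroid.
c1, c2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
emb = np.vstack([c1 + 0.1 * rng.standard_normal((100, 2)),
                 c2 + 0.1 * rng.standard_normal((100, 2))])

def kmeans(X, k, iters=20):
    centers = X[[0, len(X) // 2]]           # simple deterministic init (assumes k == 2)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(emb, 2)
# Bins sharing a cluster label would be assigned to the same speaker's mask.
```

      The appeal of this formulation is that the clustering step is speaker-independent, so the same trained network can separate mixtures of speakers never seen in training.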
  •  TALK   Advanced Recurrent Neural Networks for Automatic Speech Recognition
    Date & Time: Friday, April 29, 2016; 12:00 PM - 1:00 PM
    Speaker: Yu Zhang, MIT
    MERL Host: Shinji Watanabe
    Research Areas: Multimedia, Speech & Audio
    Brief
    • A recurrent neural network (RNN) is a class of neural network models in which connections between neurons form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior. Recently, RNN-based acoustic models have greatly improved automatic speech recognition (ASR) accuracy on many tasks, most notably an advanced version of the RNN that exploits a structure called long short-term memory (LSTM). However, ASR performance in distant-microphone, low-resource, noisy, and reverberant conditions, and on multi-talker speech, is still far from satisfactory compared to humans. To address these issues, we develop new RNN structures inspired by two principles: (1) the structure follows the intuition of human speech recognition; (2) the structure is easy to optimize. The talk will go beyond basic RNNs and introduce prediction-adaptation-correction RNNs (PAC-RNNs) and highway LSTMs (HLSTMs). It studies both uni-directional and bi-directional RNNs, with discriminative training also applied on top of the RNNs. For efficient training of such RNNs, the talk will describe two algorithms for learning their parameters in some detail: (1) latency-controlled bi-directional model training; and (2) two-pass forward computation for sequence training. Finally, this talk will analyze the advantages and disadvantages of the different variants and propose future directions.
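      For reference, the LSTM structure mentioned above maintains a gated internal cell state. Below is a minimal single-cell forward pass using the textbook LSTM equations; this is a generic sketch, not the PAC-RNN or highway-LSTM variants discussed in the talk, and the sizes and initialization are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, X), U: (4H, H), b: (4H,); gates stacked i, f, o, g."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:H])               # input gate
    f = sigmoid(z[H:2 * H])          # forget gate
    o = sigmoid(z[2 * H:3 * H])      # output gate
    g = np.tanh(z[3 * H:])           # candidate cell update
    c_new = f * c + i * g            # gated cell state: the "memory"
    h_new = o * np.tanh(c_new)       # hidden output
    return h_new, c_new

X, H = 5, 8
W = 0.1 * rng.standard_normal((4 * H, X))
U = 0.1 * rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for x in 0.1 * rng.standard_normal((20, X)):  # run over a short input sequence
    h, c = lstm_step(x, h, c, W, U, b)
```

      The additive, gated update of `c` is what lets gradients flow over long time spans; highway LSTMs extend the same gating idea to connections between stacked layers.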
  •  EVENT   MERL to celebrate 25 years of innovation
    Date: Thursday, June 2, 2016
    MERL Contacts: Elizabeth Phillips; Anthony Vetro
    Location: Norton's Woods Conference Center at American Academy of Arts & Sciences, Cambridge, MA
    Research Areas: Algorithms, Data Analytics, Electronics & Communications, Computer Vision, Mechatronics, Multimedia
    Brief
    • A celebration event to mark MERL's 25th anniversary will be held on Thursday, June 2 at the Norton's Woods Conference Center at the American Academy of Arts & Sciences in Cambridge, MA. This event will feature keynote talks, panel sessions, and a research showcase. The event itself is invitation-only, but videos and other highlights will be made available online. Further details about the program can be obtained at the link below.
  •  TALK   Driver's mental workload estimation based on the reflex eye movement
    Date & Time: Tuesday, March 15, 2016; 12:45 PM - 1:30 PM
    Speaker: Prof. Hirofumi Aoki, Nagoya University
    MERL Host: Shinji Watanabe
    Research Areas: Multimedia, Speech & Audio
    Brief
    • Driving is a complex skill that involves the vehicle itself (e.g., speed control and instrument operation), other road users (e.g., other vehicles, pedestrians), the surrounding environment, and so on. During driving, visual cues are the main source of information to the brain. In order to stabilize visual information while you are moving, the eyes move in the opposite direction based on input to the vestibular system. This involuntary eye movement is called the vestibulo-ocular reflex (VOR), and its physiological models have been studied extensively. Obinata et al. found that the VOR can be used to estimate mental workload. Since then, our research group has been developing methods to quantitatively estimate mental workload during driving by means of reflex eye movement. In this talk, I will explain the basic mechanism of the reflex eye movement and how to apply it to mental workload estimation. I will also introduce our latest work combining the VOR and OKR (optokinetic reflex) models for naturalistic driving environments.
  •  TALK   A data-centric approach to driving behavior research: How can signal processing methods contribute to the development of autonomous driving?
    Date & Time: Tuesday, March 15, 2016; 12:00 PM - 12:45 PM
    Speaker: Prof. Kazuya Takeda, Nagoya University
    MERL Host: Shinji Watanabe
    Research Areas: Multimedia, Speech & Audio
    Brief
    • Thanks to advanced "internet of things" (IoT) technologies, situation-specific human behavior has become an area of development for practical applications involving signal processing. One important such application area is driving behavior research. Since 1999, I have been collecting driving behavior data in a wide range of signal modalities, including speech/sound, video, physical/physiological sensors, CAN bus, LIDAR and GNSS. The objective of this data collection is to evaluate how well signal models can represent human behavior while driving. In this talk, I would like to summarize our 10 years of study of driving behavior signal processing, which has been based on these signal corpora. In particular, statistical signal models of the interactions between traffic contexts and driving behavior, i.e., stochastic driver modeling, will be discussed in the context of risky lane change detection. I greatly look forward to discussing the scalability of such corpus-based approaches, which could be applied to almost any traffic situation.
  •  TALK   Emotion Detection for Health Related Issues
    Date & Time: Tuesday, February 16, 2016; 12:00 PM - 1:00 PM
    Speaker: Dr. Najim Dehak, MIT
    MERL Host: Shinji Watanabe
    Research Areas: Multimedia, Speech & Audio
    Brief
    • Recently, there has been a great increase in interest in the field of emotion recognition based on different human modalities, such as speech, heart rate, etc. Emotion recognition systems can be very useful in several areas, such as medicine and telecommunications. In the medical field, identifying emotions can be an important tool for detecting and monitoring patients with mental health disorders. In addition, identifying the emotional state from voice provides opportunities for the development of automated dialogue systems capable of producing reports for the physician based on frequent phone communication between the system and the patients. In this talk, we will describe a health-related application that uses an emotion recognition system based on the human voice to detect and monitor people's emotional state.