News & Events

114 items were found.


  •  AWARD   Awaya Prize Young Researcher Award
    Date: September 26, 2013
    Awarded to: Jonathan Le Roux
    Awarded for: "A new non-negative dynamical system for speech and audio modeling"
    Awarded by: Acoustical Society of Japan (ASJ)
    MERL Contact: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
  •  EVENT   CHiME 2013 - The 2nd International Workshop on Machine Listening in Multisource Environments
    Date & Time: Saturday, June 1, 2013; 9:00 AM - 6:00 PM
    MERL Contact: Jonathan Le Roux
    Location: Vancouver, Canada
    Research Areas: Multimedia, Speech & Audio
    Brief
    • MERL researchers Shinji Watanabe and Jonathan Le Roux are members of the organizing committee of CHiME 2013, the 2nd International Workshop on Machine Listening in Multisource Environments, with Jonathan acting as Program Co-Chair. MERL is also a sponsor of the event.

      CHiME 2013 is a one-day workshop to be held in conjunction with ICASSP 2013 that will consider the challenge of developing machine listening applications for operation in multisource environments, i.e., real-world conditions with acoustic clutter, where the number and nature of the sound sources are unknown and changing over time. CHiME brings together researchers from a broad range of disciplines (computational hearing, blind source separation, speech recognition, machine learning) to discuss novel and established approaches to this problem. The cross-fertilisation of ideas will foster fresh approaches that efficiently combine the complementary strengths of each research field.
  •  AWARD   CHiME 2012 Speech Separation and Recognition Challenge Best Performance
    Date: June 1, 2013
    Awarded to: Yuuki Tachioka, Shinji Watanabe, Jonathan Le Roux and John R. Hershey
    Awarded for: "Discriminative Methods for Noise Robust Speech Recognition: A CHiME Challenge Benchmark"
    Awarded by: International Workshop on Machine Listening in Multisource Environments (CHiME)
    MERL Contact: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The results of the 2nd CHiME Speech Separation and Recognition Challenge are out! The team formed by MELCO researcher Yuuki Tachioka and MERL Speech & Audio team researchers Shinji Watanabe, Jonathan Le Roux and John Hershey obtained the best results in the continuous speech recognition task (Track 2). This very challenging task consisted of recognizing speech corrupted by highly non-stationary noise recorded in a real living room. Our proposal, which also included a simple yet extremely efficient denoising front-end, focused on investigating and developing state-of-the-art automatic speech recognition back-end techniques: feature transformation methods, as well as discriminative training methods for acoustic and language modeling. Our system significantly outperformed those of the other participants, and our code has since been released as an improved baseline for the community to use.
  •  NEWS   International Workshop on Machine Listening in Multisource Environments (CHiME) 2013: publication by Jonathan Le Roux, John R. Hershey, Shinji Watanabe and others
    Date: June 1, 2013
    Where: International Workshop on Machine Listening in Multisource Environments (CHiME)
    MERL Contact: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The paper "Discriminative Methods for Noise Robust Speech Recognition: A CHiME Challenge Benchmark" by Tachioka, Y., Watanabe, S., Le Roux, J. and Hershey, J.R. was presented at the International Workshop on Machine Listening in Multisource Environments (CHiME)
  •  NEWS   MERL obtains best results in the 2nd CHiME Speech Separation and Recognition Challenge
    Date: June 1, 2013
    MERL Contact: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
  •  EVENT   ICASSP 2013 - Student Career Luncheon
    Date & Time: Thursday, May 30, 2013; 12:30 PM - 2:30 PM
    MERL Contacts: Anthony Vetro; Petros Boufounos; Jonathan Le Roux
    Location: Vancouver, Canada
    Research Areas: Multimedia, Speech & Audio
    Brief
    • MERL is a sponsor of the first ICASSP Student Career Luncheon, which will take place at ICASSP 2013. MERL members will take part in the event to introduce the lab and talk with students interested in positions or internships.
  •  TALK   Practical kernel methods for automatic speech recognition
    Date & Time: Tuesday, May 7, 2013; 2:30 PM
    Speaker: Dr. Yotaro Kubo, NTT Communication Science Laboratories, Kyoto, Japan
    Research Areas: Multimedia, Speech & Audio
    Brief
    • Kernel methods are important because they combine convexity in estimation with the ability to represent nonlinear classification boundaries. However, kernel methods have not conventionally been widely used in automatic speech recognition. In this presentation, I will introduce several attempts to practically incorporate kernel methods into acoustic models for automatic speech recognition. The presentation will consist of two parts. The first part will describe maximum entropy discrimination and its application to kernel machine training. The second part will describe dimensionality reduction of kernel-based features.
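The abstract's central point, that kernel machines pair a convex training problem with a nonlinear decision function, can be illustrated with kernel ridge regression. The sketch below is a generic textbook example in NumPy, not code from the talk: fitting reduces to solving one linear system (a convex problem), yet the resulting predictor is nonlinear in the input.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian (RBF) kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, gamma=1.0, lam=1e-3):
    """Convex training: solve (K + lam*I) alpha = y in closed form."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    """Nonlinear prediction: a kernel-weighted sum over training points."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Fit a nonlinear target (a sine wave) from 50 one-dimensional samples.
X = np.linspace(0, 2 * np.pi, 50)[:, None]
y = np.sin(X[:, 0])
alpha = kernel_ridge_fit(X, y, gamma=2.0)
pred = kernel_ridge_predict(X, alpha, X, gamma=2.0)
```

A linear model could not fit this data, while the kernel model recovers it from the same closed-form solve; the regularizer `lam` controls the usual fit/smoothness trade-off.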
  •  NEWS   ICLR 2013: publication by Jonathan Le Roux and others
    Date: May 2, 2013
    Where: International Conference on Learning Representations (ICLR)
    MERL Contact: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The paper "Block Coordinate Descent for Sparse NMF" by Potluru, V.K., Plis, S.M., Le Roux, J., Pearlmutter, B.A., Calhoun, V.D. and Hayes, T.P. was presented at the International Conference on Learning Representations (ICLR)
  •  NEWS   IEEE Signal Processing Letters: publication by Jonathan Le Roux and others
    Date: March 1, 2013
    Where: IEEE Signal Processing Letters
    MERL Contact: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The article "Consistent Wiener Filtering for Audio Source Separation" by Le Roux, J. and Vincent, E. was published in IEEE Signal Processing Letters
  •  TALK   Probabilistic Latent Tensor Factorisation
    Date & Time: Tuesday, February 26, 2013; 12:00 PM
    Speaker: Prof. Taylan Cemgil, Bogazici University, Istanbul, Turkey
    MERL Host: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
    Brief
    • Algorithms for decompositions of matrices are of central importance in machine learning, signal processing and information retrieval, with SVD and NMF (Nonnegative Matrix Factorisation) being the most widely used examples. Probabilistic interpretations of matrix factorisation models are also well known and are useful in many applications (Salakhutdinov and Mnih 2008; Cemgil 2009; Fevotte et al. 2009). In recent years, decompositions of multiway arrays, known as tensor factorisations, have gained significant popularity for the analysis of large data sets with more than two entities (Kolda and Bader, 2009; Cichocki et al. 2008). We will discuss a subset of these models from a statistical modelling perspective, building upon probabilistic Bayesian generative models and generalised linear models (McCullagh and Nelder). In both views, the factorisation is implicit in a well-defined hierarchical statistical model, and factorisations can be computed via maximum likelihood.

      We express a tensor factorisation model using a factor graph, and the factor tensors are optimised iteratively. In each iteration, the update equation can be implemented by a message passing algorithm, reminiscent of variable elimination in a discrete graphical model. This setting provides a structured and efficient approach that enables very easy development of application-specific custom models, as well as algorithms for so-called coupled (collective) factorisations, where an arbitrary set of tensors is factorised simultaneously with shared factors. Extensions to full Bayesian inference for model selection, via variational approximations or MCMC, are also feasible. Well-known models of multiway analysis such as Nonnegative Matrix Factorisation (NMF), Parafac, Tucker, and models used in audio processing (Convolutive NMF, NMF2D, SF-SSNTF) appear as special cases, and new extensions can easily be developed. We will illustrate the approach with applications in link prediction and audio and music processing.
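To make the simplest special case mentioned above concrete: plain NMF, V ≈ WH, can be computed with the classical multiplicative updates of Lee and Seung for the Euclidean cost. The sketch below is a generic illustration of that iterative update scheme, not code from the talk or from the probabilistic framework it describes:

```python
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9):
    """Nonnegative matrix factorisation V ~ W @ H via the classical
    multiplicative updates for the Euclidean cost (Lee and Seung)."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    # Random nonnegative initialisation; updates preserve nonnegativity.
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        # Each pair of updates monotonically decreases ||V - W @ H||_F^2.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy usage: factor a nonnegative 4x5 matrix with rank 2.
V = np.random.default_rng(1).random((4, 5))
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H)  # Frobenius reconstruction error
```

In the probabilistic view sketched in the abstract, each such update corresponds to a maximum-likelihood step in a hierarchical generative model, and the same machinery generalises to tensors and coupled factorisations.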
  •  TALK   Bayesian Group Sparse Learning
    Date & Time: Monday, January 28, 2013; 11:00 AM
    Speaker: Prof. Jen-Tzung Chien, National Chiao Tung University, Taiwan
    Research Areas: Multimedia, Speech & Audio
    Brief
    • Bayesian learning provides attractive tools to model, analyze, search, recognize and understand real-world data. In this talk, I will introduce a new Bayesian group sparse learning method and its applications to speech recognition and signal separation. First, I present the group sparse hidden Markov models (GS-HMMs), where a sequence of acoustic features is driven by a Markov chain and each feature vector is represented by two groups of basis vectors, capturing the features across states and within states respectively. The sparse prior is imposed by introducing the Laplacian scale mixture (LSM) distribution, and the resulting robustness in speech recognition is illustrated. The LSM distribution is also incorporated into Bayesian group sparse learning based on nonnegative matrix factorization (NMF); this approach is developed to estimate the reconstructed rhythmic and harmonic music signals from a single-channel source signal. A Monte Carlo procedure is presented to infer the two groups of parameters. Future directions in Bayesian learning will also be discussed.
  •  TALK   Speech recognition for closed-captioning
    Date & Time: Tuesday, December 11, 2012; 12:00 PM
    Speaker: Takahiro Oku, NHK Science & Technology Research Laboratories
    Research Areas: Multimedia, Speech & Audio
    Brief
    • In this talk, I will present human-friendly broadcasting research conducted at NHK as well as research on speech recognition for real-time closed-captioning. The goal of human-friendly broadcasting research is to make broadcasting more accessible and enjoyable for everyone, including children, the elderly, and physically challenged persons. The automatic speech recognition technology that NHK has developed makes it possible to create captions for the hearing impaired automatically and in real time. For sports programs such as professional sumo wrestling, a closed-captioning system has already been implemented in which captions are created by applying speech recognition to a captioning re-speaker. In 2011, NHK General Television started broadcasting closed captions for the information program "Morning Market". After introducing the implemented closed-captioning system, I will talk about our recent improvements obtained with an adaptation method that creates a more effective acoustic model from error correction results, so as to reflect recognition error tendencies more effectively.
  •  NEWS   APSIPA Transactions on Signal and Information Processing: publication by Shinji Watanabe and others
    Date: December 6, 2012
    Where: APSIPA Transactions on Signal and Information Processing
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The article "Bayesian Approaches to Acoustic Modeling: A Review" by Watanabe, S. and Nakamura, A. was published in APSIPA Transactions on Signal and Information Processing
  •  NEWS   Techniques for Noise Robustness in Automatic Speech Recognition: publication by Jonathan Le Roux, John R. Hershey and others
    Date: November 28, 2012
    Where: Techniques for Noise Robustness in Automatic Speech Recognition
    MERL Contact: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The article "Factorial Models for Noise Robust Speech Recognition" by Hershey, J.R., Rennie, S.J. and Le Roux, J. was published in the book Techniques for Noise Robustness in Automatic Speech Recognition
  •  NEWS   IEEE Signal Processing Magazine: publication by Shinji Watanabe and others
    Date: November 1, 2012
    Where: IEEE Signal Processing Magazine
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The article "Structured Discriminative Models For Speech Recognition" by Gales, M., Watanabe, S. and Fosler-Lussier, E. was published in IEEE Signal Processing Magazine
  •  TALK   Self-Organizing Units (SOUs): Training Speech Recognizers Without Any Transcribed Audio
    Date & Time: Wednesday, October 24, 2012; 2:15 PM
    Speaker: Dr. Herb Gish, BBN - Raytheon
    MERL Host: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
  •  TALK   Zero-Resource Speech Pattern and Sub-Word Unit Discovery
    Date & Time: Wednesday, October 24, 2012; 9:10 AM
    Speaker: Prof. Jim Glass and Chia-ying Lee, MIT CSAIL
    MERL Host: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
  •  TALK   A new class of dynamical system models for speech and audio
    Date & Time: Wednesday, October 24, 2012; 4:05 PM
    Speaker: Dr. John R. Hershey, MERL
    MERL Host: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
  •  EVENT   SANE 2012 - Speech and Audio in the Northeast
    Date & Time: Wednesday, October 24, 2012; 8:30 AM - 5:00 PM
    MERL Contact: Jonathan Le Roux
    Location: MERL
    Research Areas: Multimedia, Speech & Audio
    Brief
    • SANE 2012, a one-day event gathering researchers and students in speech and audio from the northeast of the American continent, will be held on Wednesday October 24, 2012 at Mitsubishi Electric Research Laboratories (MERL) in Cambridge, MA.
  •  TALK   Factorial Hidden Restricted Boltzmann Machines for Noise Robust Speech Recognition
    Date & Time: Wednesday, October 24, 2012; 3:20 PM
    Speaker: Dr. Steven J. Rennie, IBM Research
    MERL Host: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio