News & Events

114 results were found.


  •  EVENT   SANE 2012 - Speech and Audio in the Northeast
    Date & Time: Wednesday, October 24, 2012; 8:30 AM - 5:00 PM
    MERL Contact: Jonathan Le Roux
    Location: MERL
    Research Areas: Multimedia, Speech & Audio
    Brief
    • SANE 2012, a one-day event gathering researchers and students in speech and audio from the northeast of the American continent, will be held on Wednesday, October 24, 2012 at Mitsubishi Electric Research Laboratories (MERL) in Cambridge, MA.
  •  
  •  TALK   Zero-Resource Speech Pattern and Sub-Word Unit Discovery
    Date & Time: Wednesday, October 24, 2012; 9:10 AM
    Speaker: Prof. Jim Glass and Chia-ying Lee, MIT CSAIL
    MERL Host: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
  •  
  •  TALK   Advances in Acoustic Modeling at IBM Research: Deep Belief Networks, Sparse Representations
    Date & Time: Wednesday, October 24, 2012; 9:55 AM
    Speaker: Dr. Tara Sainath, IBM Research
    MERL Host: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
  •  
  •  TALK   Recognizing and Classifying Environmental Sounds
    Date & Time: Wednesday, October 24, 2012; 11:00 AM
    Speaker: Prof. Dan Ellis, Columbia University
    MERL Host: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
  •  
  •  NEWS   HFES 2012: publication by Bret A. Harsham and others
    Date: October 22, 2012
    Where: Annual Meeting of the Human Factors and Ergonomics Society (HFES)
    MERL Contact: Bret Harsham
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The paper "Evaluation of Two Types of In-Vehicle Music Retrieval and Navigation Systems" by Zhang, J., Borowsky, A., Schmidt-Nielsen, B., Harsham, B., Weinberg, G., Romoser, M.R.E. and Fisher, D.L. was presented at the Annual Meeting of the Human Factors and Ergonomics Society (HFES)
  •  
  •  TALK   Non-negative Hidden Markov Modeling of Audio
    Date & Time: Thursday, October 11, 2012; 2:30 PM
    Speaker: Dr. Gautham J. Mysore, Adobe
    Research Areas: Multimedia, Speech & Audio
    Brief
    • Non-negative spectrogram factorization techniques have become quite popular in the last decade, as they are effective in modeling the spectral structure of audio. They have been used extensively for applications such as source separation and denoising. These techniques, however, fail to account for non-stationarity and temporal dynamics, two important properties of audio. In this talk, I will introduce the non-negative hidden Markov model (N-HMM) and the non-negative factorial hidden Markov model (N-FHMM) to model single sound sources and sound mixtures, respectively. They jointly model the spectral structure and temporal dynamics of sound sources while accounting for non-stationarity. I will also discuss the use of these models in applications such as source separation, denoising, and content-based audio processing, showing why they yield improved performance compared to non-negative spectrogram factorization techniques.
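For reference, the baseline non-negative spectrogram factorization that the talk builds on can be sketched as follows. This is a minimal KL-divergence NMF with multiplicative updates, run on a synthetic low-rank "spectrogram"; it is not the N-HMM itself, and all names, dimensions, and data are illustrative.

```python
import numpy as np

def nmf_kl(V, K, n_iter=200, eps=1e-10, seed=0):
    """Factor a non-negative spectrogram V (freq x time) as W @ H
    using multiplicative updates for the KL divergence."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, K)) + eps   # spectral basis vectors (columns)
    H = rng.random((K, T)) + eps   # time-varying activations (rows)
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (np.ones_like(V) @ H.T + eps)
    return W, H

# toy "spectrogram" built from two spectral patterns
F, T = 32, 40
rng = np.random.default_rng(1)
V = rng.random((F, 2)) @ rng.random((2, T))
W, H = nmf_kl(V, K=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)  # should be small
```

Note how each column of `W` is a fixed spectral template: this is exactly the stationarity assumption the N-HMM relaxes by letting the active set of templates evolve under a Markov chain.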
  •  
  •  TALK   Tensor representation of speaker space for arbitrary speaker conversion
    Date & Time: Thursday, September 6, 2012; 12:00 PM
    Speaker: Dr. Daisuke Saito, The University of Tokyo
    Research Areas: Multimedia, Speech & Audio
    Brief
    • In voice conversion studies, realizing conversion from/to an arbitrary speaker's voice is an important objective. For this purpose, eigenvoice conversion (EVC) based on an eigenvoice Gaussian mixture model (EV-GMM) was proposed. In EVC, similarly to speaker recognition approaches, a speaker space is constructed from GMM supervectors, high-dimensional vectors derived by concatenating the mean vectors of each speaker GMM. In this speaker space, each speaker is represented by a small number of weight parameters over eigen-supervectors. In this talk, we revisit the construction of the speaker space by introducing tensor analysis of the training data set. In our approach, each speaker is represented as a matrix whose rows and columns correspond to the Gaussian components and the dimensions of the mean vectors, respectively, and the speaker space is derived by tensor analysis of the set of these matrices. This approach solves an inherent problem of the supervector representation and improves voice conversion performance. Experimental results on one-to-many voice conversion demonstrate the effectiveness of the proposed approach.
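The supervector construction that the talk revisits can be sketched numerically. Here each speaker's GMM mean vectors are concatenated into one supervector and PCA over the training speakers yields eigen-supervectors, so any speaker reduces to a few weights; this sketch covers only the baseline representation (not the proposed tensor analysis), and all dimensions and data are synthetic.

```python
import numpy as np

n_speakers, n_components, dim = 20, 8, 13   # e.g. 13 MFCCs per Gaussian
rng = np.random.default_rng(0)

# stack of per-speaker GMM mean matrices (speaker x component x dim)
means = rng.normal(size=(n_speakers, n_components, dim))

# supervectors: concatenate the component means of each speaker GMM
S = means.reshape(n_speakers, n_components * dim)

# PCA over speakers -> eigen-supervectors (principal directions)
mean_sv = S.mean(axis=0)
U, sing, Vt = np.linalg.svd(S - mean_sv, full_matrices=False)
n_eigen = 4
eigen_sv = Vt[:n_eigen]                     # eigen-supervectors

# any speaker is now a small weight vector over the eigen-supervectors
weights = (S[0] - mean_sv) @ eigen_sv.T
reconstructed = mean_sv + weights @ eigen_sv
```

The flattening in `S = means.reshape(...)` is the "inherent problem" the matrix view addresses: it discards the component-vs-dimension structure that the tensor analysis keeps separate.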
  •  
  •  NEWS   IWSML 2012: publication by Jonathan Le Roux, John R. Hershey and others
    Date: March 31, 2012
    Where: International Workshop on Statistical Machine Learning for Speech Processing (IWSML)
    MERL Contact: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The paper "Latent Dirichlet Reallocation for Term Swapping" by Heaukulani, C., Le Roux, J. and Hershey, J.R. was presented at the International Workshop on Statistical Machine Learning for Speech Processing (IWSML)
  •  
  •  NEWS   ASJ 2012: publication by Jonathan Le Roux and John R. Hershey
    Date: March 13, 2012
    Where: Acoustical Society of Japan Spring Meeting (ASJ)
    MERL Contact: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The paper "Speech Enhancement by Indirect VTS" by Le Roux, J. and Hershey, J.R. was presented at the Acoustical Society of Japan Spring Meeting (ASJ)
  •  
  •  TALK   Learning Intermediate-Level Representations of Form and Motion from Natural Movies
    Date & Time: Wednesday, February 22, 2012; 11:00 AM
    Speaker: Dr. Charles Cadieu, McGovern Institute for Brain Research, MIT
    MERL Host: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The human visual system processes complex patterns of light into a rich visual representation where the objects and motions of our world are made explicit. This remarkable feat is performed through a hierarchically arranged series of cortical areas. Little is known about the details of the representations in the intermediate visual areas. Therefore, we ask the question: can we predict the detailed structure of the representations we might find in intermediate visual areas?

      In pursuit of this question, I will present a model of intermediate-level visual representation that is based on learning invariances from movies of the natural environment and produces predictions about intermediate visual areas. The model is composed of two stages of processing: an early feature representation layer, and a second layer in which invariances are explicitly represented. Invariances are learned as the result of factoring apart the temporally stable and dynamic components embedded in the early feature representation. The structure contained in these components is made explicit in the activities of second-layer units that capture invariances in both form and motion.

      When trained on natural movies, the first layer produces a factorization, or separation, of image content into a temporally persistent part representing local edge structure and a dynamic part representing local motion structure. The second-layer units are split into two populations according to the factorization in the first layer. The form-selective units receive their input from the temporally persistent part (local edge structure) and after training result in a diverse set of higher-order shape features consisting of extended contours, multi-scale edges, textures, and texture boundaries. The motion-selective units receive their input from the dynamic part (local motion structure) and after training result in a representation of image translation over different spatial scales and directions, in addition to more complex deformations. These representations provide a rich description of dynamic natural images, provide testable hypotheses regarding intermediate-level representation in visual cortex, and may be useful representations for artificial visual systems.
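The factorization into temporally persistent and dynamic components can be illustrated with a toy complex-valued signal. This sketch only mirrors the intuition (amplitude as "form", phase change as "motion") on a synthetic input; it is not the trained model, and all values are illustrative.

```python
import numpy as np

# A complex filter response z_t = a * exp(i * (phi0 + w * t)):
# the amplitude |z_t| is temporally persistent ("form"), while the
# phase velocity carries the dynamics ("motion").
t = np.arange(100)
a, phi0, w = 2.0, 0.3, 0.1
z = a * np.exp(1j * (phi0 + w * t))

amplitude = np.abs(z)           # persistent component (constant a)
phase = np.unwrap(np.angle(z))  # unwrapped phase ramp
phase_vel = np.diff(phase)      # dynamic component (constant w)
```

Splitting the second-layer units by these two components is what lets one population specialize in shape features and the other in translation and deformation, as the abstract describes.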
  •  
  •  EVENT   Audio and Music Signal Processing Mini-Symposium
    Date & Time: Thursday, October 20, 2011; 2:00 PM - 5:00 PM
    MERL Contact: Jonathan Le Roux
    Location: MERL
    Research Areas: Multimedia, Speech & Audio
    Brief
    • MERL is hosting a mini-symposium on audio and music signal processing, with three talks by eminent researchers in the field: Prof. Mark Plumbley, Dr. Cedric Fevotte and Prof. Nobutaka Ono.
  •  
  •  TALK   Analysing Digital Music
    Date & Time: Thursday, October 20, 2011; 2:20 PM
    Speaker: Prof. Mark Plumbley, Queen Mary, London
    MERL Host: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
  •  
  •  TALK   Itakura-Saito nonnegative matrix factorization and friends for music signal decomposition
    Date & Time: Thursday, October 20, 2011; 3:00 PM
    Speaker: Dr. Cedric Fevotte, CNRS - Telecom ParisTech, Paris
    MERL Host: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
  •  
  •  TALK   Auxiliary Function Approach to Source Localization and Separation
    Date & Time: Thursday, October 20, 2011; 3:40 PM
    Speaker: Prof. Nobutaka Ono, National Institute of Informatics, Tokyo
    MERL Host: Jonathan Le Roux
    Research Areas: Multimedia, Speech & Audio
  •  
  •  NEWS   International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design 2011: publication by Bret A. Harsham and others
    Date: June 27, 2011
    Where: International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design
    MERL Contact: Bret Harsham
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The paper "Investigating HUDs for the Presentation of Choice Lists in Car Navigation Systems" by Weinberg, G., Harsham, B. and Medenica, Z. was presented at the International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design
  •  
  •  NEWS   IEEE Multimedia: publication by MERL researchers and others
    Date: January 31, 2011
    Where: IEEE Multimedia
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The article "Multimodal Input in the Car, Today and Tomorrow" by Mueller, C. and Weinberg, G. was published in IEEE Multimedia
  •  
  •  NEWS   Interspeech 2010: publication by MERL researchers and others
    Date: September 26, 2010
    Where: Interspeech
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The paper "Vocabulary Independent Spoken Query: a Case for Subword Units" by Gouvea, E. and Ezzat, T. was presented at Interspeech
  •  
  •  NEWS   Annual Conference of the International Speech Communication Association 2010: publication by MERL researchers and others
    Date: September 26, 2010
    Where: Annual Conference of the International Speech Communication Association
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The paper "Ungrounded Independent Non-Negative Factor Analysis" by Raj, B., Wilson, K.W., Krueger, A. and Haeb-Umbach, R. was presented at the Annual Conference of the International Speech Communication Association
  •  
  •  NEWS   Speech in Mobile and Pervasive Environments (SiMPE) 2010: publication by Bret A. Harsham and others
    Date: September 7, 2010
    Where: Speech in Mobile and Pervasive Environments (SiMPE)
    MERL Contact: Bret Harsham
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The paper "Object-Oriented Multimodality for Safer In-Vehicle Interfaces" by Weinberg, G. and Harsham, B. was presented at Speech in Mobile and Pervasive Environments (SiMPE)
  •  
  •  NEWS   International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI) 2010: publication by Bret A. Harsham and others
    Date: September 7, 2010
    Where: International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI)
    MERL Contact: Bret Harsham
    Research Areas: Multimedia, Speech & Audio
    Brief
    • The paper "Contextual Push-to-talk: Shortening Voice Dialogs to Improve Driving Performance" by Weinberg, G., Harsham, B., Forlines, C. and Medenica, Z. was presented at the International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI)
  •