TR2014-117

Deep Unfolding: Model-Based Inspiration of Novel Deep Architectures


Abstract:

Model-based methods and deep neural networks have both been tremendously successful paradigms in machine learning. In model-based methods, we can easily express our problem domain knowledge in the constraints of the model, at the expense of difficulties during inference. Deterministic deep neural networks are constructed in such a way that inference is straightforward, but we sacrifice the ability to easily incorporate problem domain knowledge. The goal of this paper is to provide a general strategy to obtain the advantages of both approaches while avoiding many of their disadvantages. The general idea can be summarized as follows: given a model-based approach that requires an iterative inference method, we unfold the iterations into a layer-wise structure analogous to a neural network. We then de-couple the model parameters across layers to obtain novel neural-network-like architectures that can easily be trained discriminatively using gradient-based methods. The resulting formulation combines the expressive power of a conventional deep network with the internal structure of the model-based approach, while allowing inference to be performed in a fixed number of layers that can be optimized for best performance. We show how this framework can be applied to non-negative matrix factorization to obtain a novel non-negative deep neural network architecture that can be trained with a multiplicative back-propagation-style update algorithm. We present experiments in the domain of speech enhancement, where we show that the resulting model is able to outperform a conventional neural network while requiring only a fraction of the number of parameters. We believe this is due to the ability afforded by our framework to incorporate problem-level assumptions into the architecture of the deep network.
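
To make the unfolding idea concrete for the NMF case, the following minimal sketch (hypothetical NumPy code, not taken from the paper) runs one standard Euclidean-distance multiplicative update per "layer" to infer the activations, with the basis matrix de-coupled across layers. In the paper the per-layer parameters would then be trained discriminatively with non-negativity-preserving multiplicative updates; here they are simply random placeholders.

    import numpy as np

    def unfolded_nmf_inference(M, W_layers, eps=1e-8):
        """Unfolded NMF inference: each multiplicative update is one 'layer',
        with its own (de-coupled) non-negative basis matrix W.

        M        : (F, T) non-negative mixture spectrogram
        W_layers : list of (F, R) non-negative basis matrices, one per layer
        Returns the non-negative activations H after the final layer.
        """
        R, T = W_layers[0].shape[1], M.shape[1]
        H = np.ones((R, T))                               # simple non-negative initialization
        for W in W_layers:                                # one unfolded iteration per layer
            H = H * (W.T @ M) / (W.T @ (W @ H) + eps)     # stays non-negative by construction
        return H

    # Hypothetical usage: 5 unfolded layers with random (untrained) bases.
    rng = np.random.default_rng(0)
    F, T, R, K = 257, 100, 40, 5
    M = np.abs(rng.standard_normal((F, T)))
    W_layers = [np.abs(rng.standard_normal((F, R))) for _ in range(K)]
    H = unfolded_nmf_inference(M, W_layers)
    estimate = W_layers[-1] @ H    # e.g. a source estimate used to build an enhancement mask
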

 

  • Related News & Events

    •  NEWS    John Hershey gives talk at MIT on Deep Unfolding
      Date: April 28, 2015
      Brief
      • MERL researcher and speech team leader John Hershey gave a talk at MIT entitled "Deep Unfolding: Deriving Novel Deep Network Architectures from Model-based Inference Methods" on April 28, 2015.

        Abstract: Model-based methods and deep neural networks have both been tremendously successful paradigms in machine learning. In model-based methods, problem domain knowledge can be built into the constraints of the model, typically at the expense of difficulties during inference. In contrast, deterministic deep neural networks are constructed in such a way that inference is straightforward, but their architectures are rather generic and it can be unclear how to incorporate problem domain knowledge. This work aims to obtain some of the advantages of both approaches. To do so, we start with a model-based approach and unfold the iterations of its inference method to form a layer-wise structure. This results in novel neural-network-like architectures that incorporate our model-based constraints, but can be trained discriminatively to perform fast and accurate inference. This framework allows us to view conventional sigmoid networks as a special case of unfolding Markov random field inference, and leads to other interesting generalizations. We show how it can be applied to other models, such as non-negative matrix factorization, to obtain a new kind of non-negative deep neural network that can be trained using a multiplicative back-propagation-style update algorithm. In speech enhancement experiments we show that our approach is competitive with conventional neural networks, while using fewer parameters.
    •  NEWS    IEEE Spectrum's "Cars That Think" highlights MERL's speech enhancement research
      Date: March 9, 2015
      MERL Contact: Jonathan Le Roux
      Research Area: Speech & Audio
      Brief
      • Recent research on speech enhancement by MERL's Speech and Audio team was highlighted in "Cars That Think", IEEE Spectrum's blog on smart technologies for cars. IEEE Spectrum is the flagship publication of the Institute of Electrical and Electronics Engineers (IEEE), the world's largest association of technical professionals with more than 400,000 members.
    •  NEWS    MERL's noise suppression technology featured in Mitsubishi Electric Corporation press release
      Date: February 17, 2015
      MERL Contact: Jonathan Le Roux
      Research Area: Speech & Audio
      Brief
      • Mitsubishi Electric Corporation announced that it has developed breakthrough noise-suppression technology that significantly improves the quality of hands-free voice communication in noisy conditions, such as making a voice call via a car navigation system. Speech clarity is improved by removing 96% of surrounding sounds, including rapidly changing noise from turn signals or wipers, which are difficult to suppress using conventional methods. The technology is based on recent research on speech enhancement by MERL's Speech and Audio team.