Deep Unfolding: Model-Based Inspiration of Novel Deep Architectures

Model-based methods and deep neural networks have both been tremendously successful paradigms in machine learning. In model-based methods, we can easily express our problem domain knowledge in the constraints of the model, at the expense of difficulties during inference. Deterministic deep neural networks are constructed in such a way that inference is straightforward, but we sacrifice the ability to easily incorporate problem domain knowledge. The goal of this paper is to provide a general strategy to obtain the advantages of both approaches while avoiding many of their disadvantages. The general idea can be summarized as follows: given a model-based approach that requires an iterative inference method, we unfold the iterations into a layer-wise structure analogous to a neural network. We then de-couple the model parameters across layers to obtain novel neural-network-like architectures that can easily be trained discriminatively using gradient-based methods. The resulting formulation combines the expressive power of a conventional deep network with the internal structure of the model-based approach, while allowing inference to be performed in a fixed number of layers that can be optimized for best performance. We show how this framework can be applied to non-negative matrix factorization to obtain a novel non-negative deep neural network architecture that can be trained with a multiplicative back-propagation-style update algorithm. We present experiments in the domain of speech enhancement, where we show that the resulting model is able to outperform a conventional neural network while requiring only a fraction of the number of parameters. We believe this is due to the ability afforded by our framework to incorporate problem-level assumptions into the architecture of the deep network.
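The unfolding idea described above can be made concrete with a small sketch. The following NumPy code (an illustrative assumption, not the paper's implementation) unrolls the standard multiplicative NMF update for the activations H into a fixed number of layers and gives each layer its own basis matrix W, mirroring the de-coupling of model parameters across layers; the per-layer `W_per_layer` parameterization and initialization are hypothetical choices for illustration.

```python
import numpy as np

def unfolded_nmf_inference(V, W_per_layer, eps=1e-8):
    """Unfold multiplicative NMF updates for H into a layer-wise structure.

    Classic NMF inference iterates H <- H * (W^T V) / (W^T W H) with a
    single fixed basis matrix W. Deep unfolding instead runs a fixed
    number of these updates and de-couples W across layers, so each
    "layer" k applies one multiplicative update with its own W_k, which
    could then be trained discriminatively (training is omitted here).
    """
    n_basis = W_per_layer[0].shape[1]
    H = np.ones((n_basis, V.shape[1]))   # non-negative initialization of activations
    for W in W_per_layer:                # one unfolded iteration per layer
        H = H * (W.T @ V) / (W.T @ (W @ H) + eps)
    return H

# Usage: three unfolded layers, initialized with identical (copied) bases
# that de-coupled training would subsequently allow to diverge.
rng = np.random.default_rng(0)
V = np.abs(rng.standard_normal((20, 5)))   # non-negative observation, e.g. a magnitude spectrogram
W0 = np.abs(rng.standard_normal((20, 4)))  # non-negative basis matrix
H = unfolded_nmf_inference(V, [W0.copy() for _ in range(3)])
```

Because every factor in the update is non-negative, the unfolded network preserves the non-negativity constraint of the model at every layer, which is the kind of problem-level assumption the abstract refers to.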