TR2016-134

A Deep Neural Network Architecture Using Dimensionality Reduction with Sparse Matrices


    •  Matsumoto, W., Hagiwara, M., Boufounos, P.T., Fukushima, K., Mariyama, T., Xiongxin, Z., "A Deep Neural Network Architecture Using Dimensionality Reduction with Sparse Matrices", International Conference on Neural Information Processing (ICONIP), DOI: 10.1007/978-3-319-46681-1_48, October 2016, vol. 9950, pp. 397-404.
      BibTeX:
      @inproceedings{Matsumoto2016oct,
        author = {Matsumoto, Wataru and Hagiwara, Manabu and Boufounos, Petros T. and Fukushima, Kunihiko and Mariyama, Toshisada and Xiongxin, Zhao},
        title = {A Deep Neural Network Architecture Using Dimensionality Reduction with Sparse Matrices},
        booktitle = {International Conference on Neural Information Processing (ICONIP)},
        year = 2016,
        volume = 9950,
        pages = {397--404},
        month = oct,
        doi = {10.1007/978-3-319-46681-1_48},
        issn = {0302-9743},
        isbn = {978-3-319-46681-1},
        url = {https://www.merl.com/publications/TR2016-134}
      }
  • Research Area: Computational Sensing

Abstract:

We present a new deep neural network architecture, motivated by sparse random matrix theory, that uses a low-complexity embedding through a sparse matrix instead of a conventional stacked autoencoder. We regard autoencoders as information-preserving dimensionality reduction methods, similar to random projections in compressed sensing. Thus, exploiting recent theory on sparse matrices for dimensionality reduction, we demonstrate experimentally that classification performance does not deteriorate if the autoencoder is replaced with a computationally efficient sparse dimensionality reduction matrix.
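
To make the idea concrete, the following is a minimal sketch of replacing a trained encoder with a fixed sparse random projection. The construction shown (a ±1 matrix with a few nonzeros per input dimension, here 4, projecting 784 dimensions down to 200) and all names and parameters are illustrative assumptions, not the paper's exact design, which draws its matrices from sparse random matrix theory for compressed sensing.

    import numpy as np
    from scipy import sparse

    def sparse_projection(d_in, d_out, nnz_per_col=4, seed=0):
        # Fixed sparse embedding: each input dimension connects to
        # `nnz_per_col` randomly chosen outputs with random +/-1 weights.
        # (Illustrative construction, not the authors' exact matrix.)
        rng = np.random.default_rng(seed)
        rows, cols, vals = [], [], []
        for j in range(d_in):
            out_idx = rng.choice(d_out, size=nnz_per_col, replace=False)
            rows.extend(out_idx)
            cols.extend([j] * nnz_per_col)
            vals.extend(rng.choice([-1.0, 1.0], size=nnz_per_col))
        A = sparse.csr_matrix((vals, (rows, cols)), shape=(d_out, d_in))
        return A / np.sqrt(nnz_per_col)  # scale so norms are roughly preserved

    # Use the fixed sparse matrix as the first, untrained layer:
    X = np.random.rand(128, 784)       # batch of flattened inputs, e.g. images
    A = sparse_projection(784, 200)
    Z = (A @ X.T).T                    # 128 x 200 features fed to the classifier

Because the matrix has only a few nonzeros per column, applying it costs on the order of the number of nonzeros rather than the full dense d_in x d_out product, and it requires no pretraining, which is the source of the low-complexity embedding the abstract describes.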