TR2022-006

Model Compression Using Optimal Transport


    •  Lohit, S., Jones, M.J., "Model Compression Using Optimal Transport", IEEE Winter Conference on Applications of Computer Vision (WACV), January 2022.
      BibTeX:

      @inproceedings{Lohit2022jan,
        author = {Lohit, Suhas and Jones, Michael J.},
        title = {Model Compression Using Optimal Transport},
        booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
        year = 2022,
        month = jan,
        url = {https://www.merl.com/publications/TR2022-006}
      }
Research Areas:

    Artificial Intelligence, Computer Vision, Machine Learning

Abstract:

Model compression methods are important for easing the deployment of deep learning models in compute-, memory-, and energy-constrained environments such as mobile phones. Knowledge distillation is a class of model compression algorithms in which knowledge from a large teacher network is transferred to a smaller student network, thereby improving the student's performance. In this paper, we show how optimal transport-based loss functions can be used to train a student network, encouraging the student to learn parameters that bring the distribution of its features closer to that of the teacher's features. We present image classification results on CIFAR-100, SVHN, and ImageNet, and show that the proposed optimal transport loss function performs comparably to or better than other loss functions.
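
As a rough illustration of the idea, and not the paper's exact formulation, the sketch below computes an entropy-regularized optimal transport (Sinkhorn) distance between a batch of student features and the corresponding teacher features, which can be added to the usual classification loss during distillation. The squared Euclidean cost, the uniform batch marginals, and the hyperparameters eps and n_iters are illustrative assumptions.

    import math
    import torch

    def sinkhorn_ot_loss(student_feats, teacher_feats, eps=0.05, n_iters=50):
        """Entropy-regularized OT distance between two (B, D) feature batches.

        Assumes student and teacher features have matching dimension D
        (e.g., via a linear projection on the student side).
        """
        # Pairwise squared Euclidean transport cost between samples.
        cost = torch.cdist(student_feats, teacher_feats, p=2).pow(2)  # (B, B)
        B = cost.size(0)
        # Uniform marginals over the batch, kept in log space.
        log_mu = torch.full((B,), -math.log(B), device=cost.device)
        log_nu = torch.full((B,), -math.log(B), device=cost.device)
        u = torch.zeros(B, device=cost.device)
        v = torch.zeros(B, device=cost.device)
        # Log-domain Sinkhorn iterations for numerical stability.
        for _ in range(n_iters):
            M = (-cost + u[:, None] + v[None, :]) / eps
            u = u + eps * (log_mu - torch.logsumexp(M, dim=1))
            M = (-cost + u[:, None] + v[None, :]) / eps
            v = v + eps * (log_nu - torch.logsumexp(M, dim=0))
        # Approximate transport plan and the resulting OT cost.
        plan = torch.exp((-cost + u[:, None] + v[None, :]) / eps)
        return torch.sum(plan * cost)

    # Usage sketch inside a training step (names are hypothetical):
    #   logits_s, feats_s = student(x)
    #   with torch.no_grad():
    #       feats_t = teacher_backbone(x)    # frozen teacher features
    #   loss = ce_loss(logits_s, y) + lam * sinkhorn_ot_loss(feats_s, feats_t)

In this sketch, gradients reach the student through the cost matrix, pulling student features toward the teacher features they are matched with; the regularization strength eps trades off how sharp the matching is against how quickly the Sinkhorn iterations converge.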