TR2022-147

What Makes a “Good” Data Augmentation in Knowledge Distillation – A Statistical Perspective


    •  Wang, H., Lohit, S., Jones, M.J., Fu, R., "What Makes a “Good” Data Augmentation in Knowledge Distillation – A Statistical Perspective", Advances in Neural Information Processing Systems (NeurIPS), November 2022.
      BibTeX:

      @inproceedings{Wang2022nov,
        author = {Wang, Huan and Lohit, Suhas and Jones, Michael J. and Fu, Raymond},
        title = {What Makes a “Good” Data Augmentation in Knowledge Distillation – A Statistical Perspective},
        booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
        year = 2022,
        month = nov,
        url = {https://www.merl.com/publications/TR2022-147}
      }
  • Research Areas: Artificial Intelligence, Computer Vision, Machine Learning

Abstract:

Knowledge distillation (KD) is a general neural network training approach that uses a teacher model to guide a student model. Existing works mainly study KD from the network output side (e.g., trying to design a better KD loss function), while few have attempted to understand it from the input side. In particular, its interplay with data augmentation (DA) is not well understood. In this paper, we ask: Why do some DA schemes (e.g., CutMix) inherently perform much better than others in KD? What makes a “good” DA in KD? Our investigation from a statistical perspective suggests that a good DA scheme should reduce the variance of the teacher’s mean probability, which in turn leads to a lower generalization gap for the student. Beyond this theoretical understanding, we also introduce a new entropy-based data-mixing DA scheme to enhance CutMix. Extensive empirical studies support our claims and demonstrate how considerable performance gains can be obtained simply by using a better DA scheme in knowledge distillation.
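
To make the abstract's two key ideas concrete, the following is a minimal, illustrative PyTorch sketch (not the authors' released code): (1) estimating the variance of the teacher's probability on the ground-truth class over augmented inputs, the scheme-level statistic the paper relates to the student's generalization gap, and (2) an entropy-based selection step layered on top of CutMix-style mixed images. Names such as teacher, augment, and keep_ratio are assumptions, and unmixed ground-truth labels are used purely for simplicity; how mixed labels enter the paper's exact metric is not specified here.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def teacher_mean_prob_variance(teacher, dataloader, augment, device="cuda"):
        # Variance of the teacher's probability on the true class over augmented
        # inputs; under the paper's perspective, a lower value indicates a
        # "better" DA scheme for KD. (Illustrative sketch, assumed interfaces.)
        teacher.eval()
        probs = []
        for images, labels in dataloader:
            images = augment(images.to(device))          # apply the candidate DA
            labels = labels.to(device)
            p = F.softmax(teacher(images), dim=1)        # teacher probabilities
            probs.append(p.gather(1, labels.view(-1, 1)).squeeze(1))
        return torch.cat(probs).var().item()

    @torch.no_grad()
    def entropy_pick(teacher, mixed_images, keep_ratio=0.5):
        # Entropy-based selection over CutMix-style mixed images: keep the
        # samples on which the teacher is least confident (highest predictive
        # entropy) and use that subset for distillation. keep_ratio is a
        # hypothetical hyperparameter.
        p = F.softmax(teacher(mixed_images), dim=1)
        entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=1)
        k = max(1, int(keep_ratio * mixed_images.size(0)))
        idx = entropy.topk(k).indices                    # hardest samples first
        return mixed_images[idx], idx

In this sketch the variance statistic can be computed once per candidate DA scheme to compare schemes before training the student, while entropy_pick would be called on each CutMix-augmented batch during distillation.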