TR2021-082

Robust Machine Learning via Privacy/Rate-Distortion Theory


Abstract:

Robust machine learning formulations have emerged to address the prevalent vulnerability of deep neural networks to adversarial examples. Our work draws the connection between optimal robust learning and the privacy-utility tradeoff problem, which is a generalization of the rate-distortion problem. The saddle point of the game between a robust classifier and an adversarial perturbation can be found via the solution of a maximum conditional entropy problem. This information-theoretic perspective sheds light on the fundamental tradeoff between robustness and clean data performance, which ultimately arises from the geometric structure of the underlying data distribution and perturbation constraints.
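As an illustrative sketch of the kind of formulation this refers to (the notation below is a generic log-loss robust learning setup, not quoted from the report): the classifier picks a soft decision rule $Q_{\hat{Y}\mid Z}$, the adversary picks a perturbation channel $P_{Z\mid X}$ constrained by a distortion budget $\epsilon$, and under suitable convexity conditions the value of the game reduces to a maximum conditional entropy problem,

\[
\min_{Q_{\hat{Y}\mid Z}} \;\; \max_{P_{Z\mid X}:\ \mathbb{E}[d(X,Z)] \le \epsilon} \;\; \mathbb{E}\bigl[-\log Q_{\hat{Y}\mid Z}(Y\mid Z)\bigr]
\;=\;
\max_{P_{Z\mid X}:\ \mathbb{E}[d(X,Z)] \le \epsilon} \; H(Y\mid Z),
\]

with the optimal classifier given by the posterior $P_{Y\mid Z}$ induced by the worst-case channel. This mirrors the privacy-utility tradeoff, where one maximizes the equivocation $H(Y\mid Z)$ of a protected attribute subject to a distortion (utility) constraint.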

 

  • Related Publication

  • Wang, Y., Aeron, S., Rakin, A.S., Koike-Akino, T., Moulin, P., "Robust Machine Learning via Privacy/Rate-Distortion Theory", arXiv, May 2021.

    BibTeX:
    @article{Wang2021may,
      author  = {Wang, Ye and Aeron, Shuchin and Rakin, Adnan S. and Koike-Akino, Toshiaki and Moulin, Pierre},
      title   = {Robust Machine Learning via Privacy/Rate-Distortion Theory},
      journal = {arXiv},
      year    = 2021,
      month   = may,
      url     = {https://arxiv.org/abs/2007.11693}
    }