TR2021-125

Towards Universal Adversarial Examples and Defenses


Abstract:

Adversarial example attacks have recently exposed the severe vulnerability of neural network models. However, most existing attacks require some form of target model information (i.e., weights, model queries, or architecture) to improve the efficacy of the attack. We leverage the information-theoretic connections between robust learning and generalized rate-distortion theory to formulate a universal adversarial example (UAE) generation algorithm. Our algorithm trains an offline adversarial generator to minimize the mutual information of a given data distribution. At the inference phase, our UAE method can efficiently generate effective adversarial examples without high computation cost. These adversarial examples in turn allow for developing universal defenses through adversarial training. Our experiments demonstrate promising gains in the training efficiency of conventional adversarial training.
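As a rough illustration of the offline-generator idea described in the abstract, the sketch below trains a small perturbation generator against a surrogate classifier and then produces adversarial examples with a single forward pass at inference time. This is a minimal, assumption-laden example, not the paper's implementation: the names (`PerturbationGenerator`, `train_generator`, `generate_uae`), the surrogate model, and the cross-entropy attack loss used in place of the paper's mutual-information objective are all assumptions.

```python
# Hypothetical sketch of an offline UAE generator; the loss and architecture
# are placeholders, not the formulation from TR2021-125.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PerturbationGenerator(nn.Module):
    """Small conv net mapping a clean image to a bounded perturbation."""

    def __init__(self, channels: int = 3, eps: float = 8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # tanh keeps the perturbation inside an L-infinity ball of radius eps
        delta = self.eps * torch.tanh(self.net(x))
        return torch.clamp(x + delta, 0.0, 1.0)


def train_generator(gen, surrogate, loader, epochs=10, lr=1e-3, device="cpu"):
    """Offline phase: a surrogate classifier stands in for the unknown target
    model; the cross-entropy attack objective below is a generic placeholder
    for the paper's mutual-information criterion."""
    gen.to(device).train()
    surrogate.to(device).eval()
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = gen(x)
            logits = surrogate(x_adv)
            # Maximize the surrogate's loss on the perturbed input
            loss = -F.cross_entropy(logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return gen


@torch.no_grad()
def generate_uae(gen, x):
    """Inference phase: one forward pass, no per-example optimization."""
    gen.eval()
    return gen(x)
```

Under these assumptions, the examples returned by `generate_uae` could then be mixed into a standard adversarial training loop to obtain the universal defense discussed in the abstract.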

 
