TR2022-080

Data Privacy and Protection on Deep Leakage from Gradients by Layer-Wise Pruning


    •  Liu, B., Koike-Akino, T., Wang, Y., Kim, K.J., Brand, M.E., Aeron, S., Parsons, K., "Data Privacy and Protection on Deep Leakage from Gradients by Layer-Wise Pruning", IEEE Information Theory and Applications Workshop (ITA), June 2022.
      @inproceedings{Liu2022jun,
        author = {Liu, Bryan and Koike-Akino, Toshiaki and Wang, Ye and Kim, Kyeong Jin and Brand, Matthew E. and Aeron, Shuchin and Parsons, Kieran},
        title = {Data Privacy and Protection on Deep Leakage from Gradients by Layer-Wise Pruning},
        booktitle = {IEEE Information Theory and Applications Workshop (ITA)},
        year = 2022,
        month = jun,
        url = {https://www.merl.com/publications/TR2022-080}
      }
Research Areas: Artificial Intelligence, Machine Learning

Abstract:

In this paper, we study data privacy and protection in a federated learning system for image classification. We assume that the attacker has full knowledge of the gradients shared during model updates. We propose a layer-wise pruning defense that prevents the attacker from reconstructing data from these gradients. We also propose a sequential update attack, which accumulates information across training epochs. Simulation results show that the sequential update attack gradually improves the attacker's image reconstructions. Moreover, the layer-wise pruning defense is shown to be more efficient than classical element-wise threshold-based pruning of the shared gradients.
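To illustrate the two pruning styles compared in the abstract, the sketch below contrasts zeroing out entire layers' gradients with the classical element-wise magnitude threshold. This is a minimal illustration, not the paper's implementation: the function names and the norm-based rule for choosing which layers to keep are assumptions made here for concreteness.

```python
import numpy as np

def layerwise_prune(gradients, keep_ratio=0.5):
    """Zero out whole layers' gradients, keeping only a fraction of layers.
    Illustrative selection rule (an assumption): keep the layers whose
    gradients have the largest L2 norms."""
    norms = [np.linalg.norm(g) for g in gradients]
    n_keep = max(1, int(round(keep_ratio * len(gradients))))
    keep_idx = set(np.argsort(norms)[-n_keep:])
    return [g if i in keep_idx else np.zeros_like(g)
            for i, g in enumerate(gradients)]

def elementwise_prune(gradients, threshold=0.1):
    """Classical baseline: zero each gradient entry whose magnitude
    falls below a fixed threshold, independently of its layer."""
    return [np.where(np.abs(g) >= threshold, g, 0.0) for g in gradients]

# Toy example: three "layers" with gradients of differing scale.
grads = [np.ones((2, 2)), 0.01 * np.ones(3), 2.0 * np.ones(2)]
lw = layerwise_prune(grads, keep_ratio=2 / 3)   # drops the small-norm layer
ew = elementwise_prune(grads, threshold=0.1)    # drops small entries anywhere
```

The key operational difference is granularity: layer-wise pruning suppresses a layer's gradient in its entirety (hiding all information that layer leaks), whereas element-wise thresholding can leave informative entries scattered across every layer.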