ENHANCEMENT OF OBJECTIVE FUNCTION IN IMAGE RECOVERY ATTACKS UNDER GRADIENT COMPRESSION CONDITIONS IN FEDERATED LEARNING

About this article

Received: 21/03/25                Revised: 05/06/25                Published: 05/06/25

Authors

1. Hoang Van Phi, Le Quy Don Technical University
2. Dao Thi Nga, Le Quy Don Technical University

Abstract


Image recovery attacks pose a serious privacy threat in distributed machine learning systems, even when gradient compression is employed: by exploiting shared gradient information, an adversary can reconstruct the original training data, raising serious concerns about data confidentiality. This study presents an improved method, based on DLG, that raises image recovery accuracy under compressed-gradient conditions. The proposed method introduces gradient masking to selectively retain the significant gradient components, and its key innovation is the integration of Total Variation and L6-norm regularization terms into the objective function to improve image smoothness and suppress artifacts. Experiments on the MNIST and CIFAR-100 datasets show that the improved method significantly outperforms the traditional DLG and HCGLA methods, particularly at extreme compression rates. By reducing visual distortion while preserving structural detail, the proposed method offers valuable insight for strengthening data security in distributed learning and for designing robust defenses against gradient-leakage attacks under compression.
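The objective described above can be sketched in code. The following is a minimal NumPy illustration, not the authors' implementation: it assumes top-k sparsification as the compression scheme, and the function names (`top_k_mask`, `recovery_objective`) and the regularization weights `alpha` and `beta` are hypothetical choices for exposition. It combines a masked gradient-matching loss with Total Variation and L6-norm regularization terms, as the abstract outlines.

```python
import numpy as np

def top_k_mask(grad, ratio=0.01):
    """Gradient masking: keep only the largest-magnitude components,
    mirroring which entries survive top-k gradient compression."""
    flat = np.abs(grad).ravel()
    k = max(1, int(ratio * flat.size))
    thresh = np.partition(flat, -k)[-k]      # k-th largest magnitude
    return (np.abs(grad) >= thresh).astype(grad.dtype)

def total_variation(img):
    """Anisotropic TV: sum of absolute differences between neighbors,
    penalizing high-frequency artifacts in the reconstruction."""
    dh = np.abs(np.diff(img, axis=-1)).sum()
    dv = np.abs(np.diff(img, axis=-2)).sum()
    return dh + dv

def recovery_objective(dummy_grads, true_grads, dummy_img,
                       alpha=1e-4, beta=1e-6):
    """Masked gradient-matching loss + TV + L6-norm regularization.
    The attacker minimizes this over the dummy image (and labels)."""
    match = 0.0
    for dg, tg in zip(dummy_grads, true_grads):
        m = top_k_mask(tg)                   # compare only retained entries
        match += ((m * (dg - tg)) ** 2).sum()
    l6 = (np.abs(dummy_img) ** 6).sum() ** (1.0 / 6.0)
    return match + alpha * total_variation(dummy_img) + beta * l6
```

In an actual attack this objective would be minimized with a gradient-based optimizer (e.g. L-BFGS or Adam) over the dummy image, as in DLG; here only the loss itself is shown.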

Keywords


Deep leakage from gradients; Image recovery attack; Distributed machine learning; Data security; Gradient compression


References


[1] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. Y. Arcas, "Communication-Efficient Learning of Deep Networks from Decentralized Data," in International Conference on Artificial Intelligence and Statistics, 2017, pp. 1273-1282.

[2] J. Geiping, H. Bauermeister, H. Dröge, and M. Moeller, "Inverting Gradients - How easy is it to break privacy in federated learning?," Advances In Neural Information Processing Systems, vol. 33, pp. 16937-16947, 2020.

[3] B. Zhao, K. R. Mopuri, and H. Bilen, "iDLG: Improved Deep Leakage from Gradients," arXiv preprint arXiv:2001.02610, 2020.

[4] L. Zhu, Z. Liu, and S. Han, "Deep leakage from gradients," in Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019, Art. no. 1323.

[5] D. Alistarh, D. Grubic, J. Li, R. Tomioka, and M. Vojnovic, "QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding," in Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017, pp. 1707-1718.

[6] A. F. Aji and K. Heafield, "Sparse Communication for Distributed Gradient Descent," ArXiv, vol. abs/1704.05021, 2017.

[7] H. Yang, M. Ge, K. Xiang, and J. Li, "Using Highly Compressed Gradients in Federated Learning for Data Reconstruction Attacks," IEEE Transactions on Information Forensics and Security, vol. 18, pp. 818-830, 2023.

[8] Y. Lin, S. Han, H. Mao, Y. Wang, and W. J. Dally, "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training," ArXiv, vol. abs/1712.01887, 2017.

[9] W. Wei et al., "A Framework for Evaluating Gradient Leakage Attacks in Federated Learning," ArXiv, vol. abs/2004.10397, 2020.

[10] J. Jeon, K. Lee, S. Oh, and J. Ok, "Gradient inversion with generative image prior," Advances in neural information processing systems, vol. 34, pp. 29898-29908, 2021.

[11] A. Mahendran and A. Vedaldi, "Understanding Deep Image Representations by Inverting Them," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 5188-5196.

[12] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436-444, 2015.

[13] X. Zhang, Matrix Analysis and Applications. Cambridge University Press, 2017.

[14] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear Phenomena, vol. 60, pp. 259-268, 1992.

[15] X. Zhang, M. Burger, and S. Osher, "A Unified Primal-Dual Algorithm Framework Based on Bregman Iteration," Journal of Scientific Computing, vol. 46, pp. 20-46, 2010.

[16] S. P. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.

DOI: https://doi.org/10.34238/tnu-jst.12360

TNU Journal of Science and Technology