ENHANCING THE ROBUSTNESS OF DEEP LEARNING MODELS USING ROBUST SPARSE PCA TO DENOISE ADVERSARIAL IMAGES

About this article

Received: 08/11/23                Revised: 07/12/23                Published: 07/12/23

Authors

1. Truong Phi Ho, Vietnam Academy of Cryptography Techniques
2. Truong Quang Binh, School of Information and Communication Technology - Hanoi University of Science and Technology
3. Nguyen Vinh Quang, Vietnam Academy of Cryptography Techniques
4. Nguyen Nhat Hai, School of Information and Communication Technology - Hanoi University of Science and Technology
5. Pham Duy Trung (corresponding author), Vietnam Academy of Cryptography Techniques

Abstract


Recent years have seen the rapid development of artificial intelligence. Deep learning applications are now widely deployed in daily life, including object recognition, face recognition, autonomous driving, and even medicine. However, these systems face serious risks from adversarial attacks on deep learning models. Attackers craft adversarial examples containing small perturbations that are barely perceptible to the naked eye yet can fool deep learning models. Many studies have shown that crafting an adversarial example largely amounts to adding such a perturbation to a clean image. In this paper, the authors propose using the Robust Sparse Principal Component Analysis (Robust Sparse PCA) method to denoise adversarial images. The experimental results demonstrate that Robust Sparse PCA is effective at selecting and retaining the key features of an image while removing the unwanted noise present in the input, so that the denoised images are classified accurately by the machine learning model.
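For illustration, the sketch below shows the kind of denoising pipeline the abstract describes: flatten a batch of possibly adversarial images, project them onto a sparse PCA basis, and reconstruct them before classification. This is not the authors' implementation; scikit-learn's SparsePCA is used as a stand-in for a robust sparse PCA estimator, and the component count, sparsity penalty, and classifier are illustrative assumptions.

```python
# Minimal sketch: sparse-PCA-based denoising of (possibly adversarial)
# images before classification. scikit-learn's SparsePCA stands in for a
# robust sparse PCA estimator; n_components and alpha are assumed values.
import numpy as np
from sklearn.decomposition import SparsePCA

def denoise_images(images, n_components=32, alpha=1.0):
    """Project flattened images onto a sparse PCA basis and reconstruct.

    images: array of shape (n_samples, n_pixels), pixel values in [0, 1].
    """
    spca = SparsePCA(n_components=n_components, alpha=alpha, random_state=0)
    codes = spca.fit_transform(images)  # sparse component scores
    # Reconstruct from the learned sparse components only; the discarded
    # residual is where small adversarial perturbations tend to concentrate.
    reconstructed = codes @ spca.components_ + spca.mean_
    return np.clip(reconstructed, 0.0, 1.0)

# Usage (hypothetical classifier): denoise a CIFAR-10-sized batch, then classify.
# batch = adversarial_images.reshape(len(adversarial_images), -1)  # (n, 3072)
# preds = classifier.predict(denoise_images(batch).reshape(-1, 32, 32, 3))
```

In practice the sparse basis would typically be fitted on clean training images and then applied unchanged to incoming test images, so that the projection itself cannot be influenced by the attacker.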

Keywords


Deep learning; Adversarial examples; Image features; Sparse PCA; Robustness of model



DOI: https://doi.org/10.34238/tnu-jst.9166
