
ESTIMATING ROBUSTNESS OF DEEP LEARNING MODELS BY THREE ADVERSARIAL ATTACKS

About this article

Received: 27/04/23                Revised: 24/05/23                Published: 24/05/23

Authors

1. Truong Phi Ho, Vietnam Academy of Cryptography Techniques
2. Le Thi Ngoc Anh, Vietnam Academy of Cryptography Techniques
3. Phan Xuan Khiem, Vietnam Academy of Cryptography Techniques
4. Pham Duy Trung, Vietnam Academy of Cryptography Techniques

Abstract


Deep learning is currently an area of intense research and development worldwide, and deep learning models are widely deployed in practical applications for work and everyday life. However, deep learning carries significant security risks; in particular, adversarial attacks that use adversarial examples have recently become a major challenge for deep learning specifically and for machine learning in general. To test the robustness of a machine learning model, we propose using three adversarial attacks to compute a benchmark. The attack methods are evaluated experimentally on the MS-COCO dataset, which is used to train and test the YOLO model. The article summarizes the attack success rates obtained with the proposed indicators through the experiments conducted by the authors in order to verify the robustness of deep learning models in general. The comprehensive experiments in this study were performed on YOLOv7, a widely used deep learning model that is considered state of the art today, to test and evaluate its robustness.
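To make the benchmarking idea concrete, the sketch below computes an attack success rate for a model under a single-step FGSM-style perturbation, one of the attack families cited in the references. The abstract does not name the three attacks or the exact indicators used in the paper, so the choice of FGSM, the classification-style success metric, and the model and loader objects are illustrative assumptions, not the authors' actual protocol.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=8 / 255):
    """Return adversarially perturbed copies of `images` (single-step FGSM)."""
    # `model` is assumed to be an eval-mode torch.nn.Module returning class logits;
    # FGSM here is only a stand-in for whichever attacks the paper actually uses.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clip to a valid pixel range.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def attack_success_rate(model, loader, epsilon=8 / 255):
    """Fraction of originally correct predictions that the attack flips."""
    flipped, correct = 0, 0
    for images, labels in loader:
        with torch.no_grad():
            clean_pred = model(images).argmax(dim=1)
        adv_images = fgsm_perturb(model, images, labels, epsilon)
        with torch.no_grad():
            adv_pred = model(adv_images).argmax(dim=1)
        mask = clean_pred.eq(labels)   # count only samples the model classified correctly
        correct += int(mask.sum())
        flipped += int((adv_pred[mask] != labels[mask]).sum())
    return flipped / max(correct, 1)

For an object detector such as YOLOv7, a success indicator would more naturally be defined over detections, for example a drop in mAP or in the number of correctly detected objects on MS-COCO, rather than a flipped class label as in this simplified sketch.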

Keywords


Adversarial attack; Targeted attack; Non-targeted attack; Robustness; Benchmark


References


[1] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok, “Synthesizing robust adversarial examples,” in International conference on machine learning, PMLR, 2018, pp. 284-293.

[2] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song, “Robust physical-world attacks on deep learning visual classification,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 1625-1634.

[3] S. Bhambri, S. Muku, A. Tulasi, and A. B. Buduru, “A survey of black-box adversarial attacks on computer vision models,” arXiv preprint arXiv:1912.01667, 2019.

[4] M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar, “Can machine learning be secure?,” in Proceedings of the 2006 ACM Symposium on Information, computer and communications security, 2006, pp. 16-25.

[5] B. Biggio, B. Nelson, and P. Laskov, “Poisoning attacks against support vector machines,” arXiv preprint arXiv:1206.6389, 2012.

[6] F. Behnia, A. Mirzaeian, M. Sabokrou, S. Manoj, T. Mohsenin, K. N. Khasawneh, L. Zhao, H. Homayoun, and A. Sasan, “Code-Bridged Classifier (CBC): A Low or Negative Overhead Defense for Making a CNN Classifier Robust Against Adversarial Attacks,” arXiv:2001.06099v1, 2020.

[7] X. Yuan, P. He, Q. Zhu, and X. Li, “Adversarial examples: Attacks and defenses for deep learning,” IEEE Transactions on Neural Networks and Learning Systems, pp. 2805-2824, 2019.

[8] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.

[9] M. Andriushchenko, F. Croce, N. Flammarion, and M. Hein, “Square Attack: a query-efficient black-box adversarial attack via random search,” arXiv preprint arXiv:1912.00049v3, 2020.

[10] P. Y. Chen, H. Zhang, Y. Sharma, J. Yi, and C. J. Hsieh, “Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models,” in Proceedings of the 10th ACM workshop on artificial intelligence and security, 2017, doi: 10.1145/3128572.3140448.

[11] P. H. Truong, T. N. Hoang, Q. T. Pham, M. T. Pham, and D. T. Pham, “Adversarial attacks into deep learning models using pixel transformation,” (in Vietnamese), TNU Journal of Science and Technology, vol. 228, no. 02: Natural Sciences - Engineering - Technology, pp. 94-102, 2023.

[12] Tristan, Alex, Kostya, I. J. Roth, J. Hallberg, and T. Spiegel, “Gaussian Noise,” Hasty’s end-to-end ML platform, 2019. [Online]. Available: https://hasty.ai/docs/mp-wiki/augmentations/gaussian-noise. [Accessed Dec. 21, 2022].

[13] P. Lorenz, D. Straßel, M. Keuper, and J. Keuper, “Is RobustBench/AutoAttack a suitable Benchmark for Adversarial Robustness?,” arXiv:2112.01601v2, 2022.

[14] T. Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in Proceedings of 13th European Conference on Computer Vision–ECCV, Springer International Publishing, 2014, pp. 740-755.

[15] J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in 2009 IEEE conference on computer vision and pattern recognition, IEEE, 2009, pp. 248-255.

[16] C. Y. Wang, A. Bochkovskiy, and H. Y. Liao, “YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors,” arXiv:2207.02696, 2022.

[17] C. Ma, C. Zhao, H. Shi, L. Chen, J. Yong, and D. Zeng, “Metaadvdet: Towards robust detection of evolving adversarial attacks,” in Proceedings of the 27th ACM International Conference on Multimedia, 2019, pp. 692-701.




DOI: https://doi.org/10.34238/tnu-jst.7842
