TESTING THE ROBUSTNESS OF DEEP LEARNING MODELS USING THREE ADVERSARIAL ATTACKS
Article information
Received: 27/04/23                Revised: 24/05/23                Published: 24/05/23
Abstract
Keywords
Full text: PDF (English)
References
[1] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok, “Synthesizing robust adversarial examples,” in International conference on machine learning, PMLR, 2018, pp. 284-293.
[2] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song, “Robust physical-world attacks on deep learning visual classification,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 1625-1634.
[3] S. Bhambri, S. Muku, A. Tulasi, and A. B. Buduru, “A survey of black-box adversarial attacks on computer vision models,” arXiv preprint arXiv:1912.01667, 2019.
[4] M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar, “Can machine learning be secure?,” in Proceedings of the 2006 ACM Symposium on Information, computer and communications security, 2006, pp. 16-25.
[5] B. Biggio, B. Nelson, and P. Laskov, “Poisoning attacks against support vector machines,” arXiv preprint arXiv:1206.6389, 2012.
[6] F. Behnia, A. Mirzaeian, M. Sabokrou, S. Manoj, T. Mohsenin, K. N. Khasawneh, L. Zhao, H. Homayoun, and A. Sasan, “Code-Bridged Classifier (CBC): A Low or Negative Overhead Defense for Making a CNN Classifier Robust Against Adversarial Attacks,” arXiv:2001.06099v1, 2020.
[7] X. Yuan, P. He, Q. Zhu, and X. Li, “Adversarial examples: Attacks and defenses for deep learning,” IEEE transactions on neural networks and learning systems, 2019, pp. 2805-2824.
[8] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
[9] M. Andriushchenko, F. Croce, N. Flammarion, and M. Hein, “Square Attack: a query-efficient black-box adversarial attack via random search,” arXiv preprint arXiv:1912.00049v3, 2020.
[10] P. Y. Chen, H. Zhang, Y. Sharma, J. Yi, and C. J. Hsieh, “Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models,” in Proceedings of the 10th ACM workshop on artificial intelligence and security, 2017, doi: 10.1145/3128572.3140448.
[11] P. H. Truong, T. N. Hoang, Q. T. Pham, M. T. Pham, and D. T. Pham, “Adversarial attacks into deep learning models using pixel transformation,” (in Vietnamese), TNU Journal of Science and Technology, vol. 228, no. 02: Natural Sciences - Engineering - Technology, pp. 94-102, 2023.
[12] Tristan, Alex, Kostya, I. J. Roth, J. Hallberg, and T. Spiegel, “Gaussian Noise,” Hasty’s end-to-end ML platform, 2019. [Online]. Available: https://hasty.ai/docs/mp-wiki/augmentations/gaussian-noise. [Accessed Dec. 21, 2022].
[13] P. Lorenz, D. Straßel, M. Keuper, and J. Keuper, “Is RobustBench/AutoAttack a suitable Benchmark for Adversarial Robustness?,” arXiv:2112.01601v2, 2022.
[14] T. Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in Proceedings of the 13th European Conference on Computer Vision (ECCV), Springer International Publishing, 2014, pp. 740-755.
[15] J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in 2009 IEEE conference on computer vision and pattern recognition, IEEE, 2009, pp. 248-255.
[16] C. Y. Wang, A. Bochkovskiy, and H. Y. Liao, “YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors,” arXiv:2207.02696, 2022.
[17] C. Ma, C. Zhao, H. Shi, L. Chen, J. Yong, and D. Zeng, “Metaadvdet: Towards robust detection of evolving adversarial attacks,” in Proceedings of the 27th ACM International Conference on Multimedia, 2019, pp. 692-701.
DOI: https://doi.org/10.34238/tnu-jst.7842