A PROPOSED SOLUTION FOR REMOVING ADVERSARIAL PERTURBATIONS USING A DEEP LEARNING-BASED GENERATIVE MODEL
Article information
Received: 06/11/2024                Revised: 18/12/2024                Published: 18/12/2024
Abstract
Keywords
Full text: PDF
References
[1] L. Li, “Application of deep learning in image recognition,” Journal of Physics: Conference Series, vol. 1693, no. 1, 2020, Art. no. 012128.
[2] N. Xu, “The application of deep learning in image processing is studied based on the reel neural network model,” Journal of Physics: Conference Series, vol. 1881, no. 3, 2021, Art. no. 032096.
[3] J. Yang, Y. Sheng, Y. Zhang, W. Jiang, and L. Yang, “On-device unsupervised image segmentation,” in 2023 60th ACM/IEEE Design Automation Conference (DAC), IEEE, 2023, pp. 1-6.
[4] J. Ma, P. Liang, W. Yu, C. Chen, X. Guo, J. Wu, and J. Jiang, “Infrared and visible image fusion via detail preserving adversarial learning,” Information Fusion, vol. 54, pp. 85-98, 2020.
[5] Y. Shi, Y. Han, Q. Zhang, and X. Kuang, “Adaptive iterative attack towards explainable adversarial robustness,” Pattern Recognition, vol. 105, 2020, Art. no. 107309.
[6] Y. Xiao, C. M. Pun, and B. Liu, “Fooling deep neural detection networks with adaptive object-oriented adversarial perturbation,” Pattern Recognition, vol. 115, 2021, Art. no. 107903.
[7] M. O. K. Mendonça, J. Maroto, P. Frossard, and P. S. R. Diniz, “Adversarial training with informed data selection,” in 2022 30th European Signal Processing Conference (EUSIPCO), IEEE, 2022, pp. 608-612.
[8] E. C. Yeats, Y. Chen, and H. Li, “Improving gradient regularization using complex-valued neural networks,” in International Conference on Machine Learning, 2021, pp. 11953-11963.
[9] Z. Liu, Q. Liu, T. Liu, N. Xu, X. Lin, Y. Wang, and W. Wen, “Feature distillation: DNN-oriented JPEG compression against adversarial examples,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2019, pp. 860-868.
[10] X. Jia, X. Wei, X. Cao, and H. Foroosh, “ComDefend: An efficient image compression model to defend adversarial examples,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 6084-6092.
[11] F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel, “Ensemble adversarial training: Attacks and defenses,” arXiv preprint arXiv:1705.07204, 2017.
[12] C. Xie, Y. Wu, L. Maaten, A. L. Yuille, and K. He, “Feature denoising for improving adversarial robustness,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 501-509.
[13] J. Chen, X. Zhang, R. Zhang, C. Wang, and L. Liu, “De-pois: An attack-agnostic defense against data poisoning attacks,” IEEE Transactions on Information Forensics and Security, vol. 16, pp. 3412-3425, 2021.
[14] Y. Bai, Y. Feng, Y. Wang, T. Dai, S. T. Xia, and Y. Jiang, “Hilbert-based generative defense for adversarial examples,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 4784-4793.
[15] A. Shukla, P. Turaga, and S. Anand, “Gracias: Grassmannian of corrupted images for adversarial security,” arXiv preprint arXiv:2005.02936, 2020.
[16] C. Guo, M. Rana, M. Cisse, and L. V. D. Maaten, “Countering adversarial images using input transformations,” arXiv preprint arXiv:1711.00117, 2017.
[17] P. H. Truong, C. T. Nguyen, N. M. Pham, D. T. Pham, and T. L. Bui, “A novel Hybrid CIFAR-10 dataset for Adversarial training to enhance the Robustness of Deep learning models,” in The XXVII National Conference “Some Selected Issues on Information and Communication Technology”, 2024, pp. 27-32.
[18] I. J. Goodfellow, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
[19] A. Kurakin, I. J. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” in Artificial Intelligence Safety and Security, Chapman and Hall/CRC, 2018, pp. 99-112.
[20] Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li, “Boosting adversarial attacks with momentum,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 9185-9193.
[21] F. Nesti, A. Biondi, and G. Buttazzo, “Detecting adversarial examples by input transformations, defense perturbations, and voting,” IEEE transactions on neural networks and learning systems, vol. 34, no. 3, pp. 1329-1341, 2021.
[22] W. Zhang, “Generating adversarial examples in one shot with image-to-image translation GAN,” IEEE Access, vol. 7, pp. 151103-151119, 2019.
[23] J. J. Bird and A. Lotfi, “Cifake: Image classification and explainable identification of ai-generated synthetic images,” IEEE Access, vol. 12, pp. 15642-15650, 2024.
[24] F. Liao, M. Liang, Y. Dong, T. Pang, X. Hu, and J. Zhu, “Defense against adversarial attacks using high-level representation guided denoiser,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1778-1787.
[25] Y. Song, T. Kim, S. Nowozin, S. Ermon, and N. Kushman, “PixelDefend: Leveraging generative models to understand and defend against adversarial examples,” arXiv preprint arXiv:1710.10766, 2017.
[26] P. Samangouei, “Defense-GAN: Protecting classifiers against adversarial attacks using generative models,” arXiv preprint arXiv:1805.06605, 2018.
[27] Y. Abouelnaga, O. S. Ali, H. Rady, and M. Moustafa, “CIFAR-10: KNN-based ensemble of classifiers,” in 2016 International Conference on Computational Science and Computational Intelligence (CSCI), IEEE, 2016, pp. 1192-1195.
[28] D. T. Pham, C. T. Nguyen, P. H. Truong, and N. H. Nguyen, “Automated generation of adaptive perturbed images based on GAN for motivated adversaries on deep learning models,” in Proceedings of the 12th International Symposium on Information and Communication Technology, 2023, pp. 808-815.
DOI: https://doi.org/10.34238/tnu-jst.11486