BUILDING AN EFFICIENT DEEP LEARNING MODEL FOR SKIN DISEASE RECOGNITION BASED ON SELF-KNOWLEDGE DISTILLATION
Article information
Received: 28/10/22                Revised: 22/11/22                Published: 22/11/22
Abstract
Keywords
References
[1] The Skin Cancer Foundation, “Skin Cancer Facts & Statistics,” May 2022. [Online]. Available: https://www.skincancer.org/skin-cancer-information/skin-cancer-facts/. [Accessed Sept. 1, 2022].
[2] Melanoma UK, “2020 Melanoma skin cancer report,” May 2020. [Online]. Available: https://www.melanomauk.org.uk/2020-melanoma-skin-cancer-report. [Accessed Sept. 1, 2022].
[3] R. K. Voss, T. N. Woods, K. D. Cromwell, K. C. Nelson, and J. N. Cormier, “Improving outcomes in patients with melanoma: strategies to ensure an early diagnosis,” Patient Related Outcome Measures, vol. 6, pp. 229-242, 2015.
[4] H. Kittler, H. Pehamberger, K. Wolff, and M. Binder, “Diagnostic accuracy of dermoscopy,” The Lancet Oncology, vol. 3, no. 3, pp. 159-165, 2002.
[5] T. J. Brinker, A. Hekler, A. H. Enk, J. Klode, A. Hauschild, C. Berking, and P. Schrüfer, “A convolutional neural network trained with dermoscopic images performed on par with dermatologists in a clinical melanoma image classification task,” European Journal of Cancer, vol. 111, pp. 148-154, 2019.
[6] T. J. Brinker, A. Hekler, A. H. Enk, J. Klode, A. Hauschild, C. Berking, and P. Schrüfer, “Deep learning outperformed dermatologists in a head-to-head dermoscopic melanoma image classification task,” European Journal of Cancer, vol. 113, pp. 47-54, 2019.
[7] E. Valle, M. Fornaciali, A. Menegola, J. Tavares, F. V. Bittencourt, L. T. Li, and S. Avila, “Data, depth, and design: Learning reliable models for skin lesion analysis,” Neurocomputing, vol. 383, pp. 303-313, 2020.
[8] J. Zhang, Y. Xie, Y. Xia, and C. Shen, “Attention residual learning for skin lesion classification,” IEEE Transactions on Medical Imaging, vol. 38, no. 9, pp. 2092-2103, 2019.
[9] G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” NIPS Deep Learning and Representation Learning Workshop, 2015, pp. 1-9.
[10] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio, “FitNets: Hints for thin deep nets,” Proc. International Conference on Learning Representations (ICLR), 2015, pp. 1-13.
[11] T. Guo, C. Xu, S. He, B. Shi, C. Xu, and D. Tao, “Robust student network learning,” IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 7, pp. 2455-2468, 2019.
[12] D. Q. Vu, N. Le, and J. C. Wang, “Teaching yourself: A self-knowledge distillation approach to action recognition,” IEEE Access, vol. 9, pp. 105711-105723, 2021.
[13] D. Q. Vu and J. C. Wang, “A novel self-knowledge distillation approach with siamese representation learning for action recognition,” International Conference on Visual Communications and Image Processing (VCIP), 2021, pp. 1-5.
[14] Q. V. Duc, T. Phung, M. Nguyen, B. Y. Nguyen, and T. H. Nguyen, “Self-knowledge Distillation: An Efficient Approach for Falling Detection,” International Conference on Artificial Intelligence and Big Data in Digital Era, 2022, pp. 369-380.
[15] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
[16] P. Tschandl, C. Rosendahl, and H. Kittler, “The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions,” Scientific Data, vol. 5, no. 1, pp. 1-9, 2018.
[17] E. D. Cubuk, B. Zoph, J. Shlens, and Q. V. Le, “RandAugment: Practical automated data augmentation with a reduced search space,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020, pp. 702-703.
[18] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, Inception-ResNet and the impact of residual connections on learning,” Thirty-First AAAI Conference on Artificial Intelligence, 2017, pp. 1-12.
[19] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4700-4708.
[20] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” The 3rd International Conference on Learning Representations (ICLR), 2015, pp. 1-14.
[21] J. Huang and C. X. Ling, “Using AUC and accuracy in evaluating learning algorithms,” IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 3, pp. 299-310, 2005.
[22] F. Wang, M. Jiang, C. Qian, S. Yang, C. Li, H. Zhang, and X. Tang, “Residual attention network for image classification,” IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 3156-3164.
[23] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7132-7141.
[24] J. Zhang, Y. Xie, Y. Xia, and C. Shen, “Attention residual learning for skin lesion classification,” IEEE Transactions on Medical Imaging, vol. 38, no. 9, pp. 2092-2103, 2019.
[25] S. K. Datta, M. A. Shaikh, S. N. Srihari, and M. Gao, “Soft Attention Improves Skin Cancer Classification Performance,” Interpretability of Machine Intelligence in Medical Image Computing, and Topological Data Analysis and Its Applications for Medical Data, 2021, pp. 13-23.
[26] N. Gessert, M. Nielsen, M. Shaikh, R. Werner, and A. Schlaefer, “Skin lesion classification using ensembles of multi-resolution EfficientNets with meta data,” MethodsX, vol. 7, pp. 1-8, 2020.
DOI: https://doi.org/10.34238/tnu-jst.6803