BUILDING AN AUTOMATIC MODEL FOR NUCHAL TRANSLUCENCY SEGMENTATION IN FETAL ULTRASOUND IMAGES
Article information
Received: 24/04/24                Revised: 10/06/24                Published: 11/06/24
Abstract
Keywords
DOI: https://doi.org/10.34238/tnu-jst.10205