FACIAL ANIMATION TECHNIQUES BASED ON VIETNAMESE PRONUNCIATION
Article Information
Received: 14/02/23                Revised: 07/04/23                Published: 13/04/23
Abstract
Keywords
Full text:
PDF
DOI: https://doi.org/10.34238/tnu-jst.7332