AN EFFECTIVE METHOD COMBINING DEEP LEARNING MODELS AND REINFORCEMENT LEARNING TECHNIQUES FOR EXTRACTIVE TEXT SUMMARIZATION
Article information
Received: 13/07/21    Revised: 12/08/21    Published: 12/08/21
Abstract
Keywords
Full text:
PDF
DOI: https://doi.org/10.34238/tnu-jst.4747