[1] 刘迪, 奚雪峰, 崔志明, 等. 抽取-生成式自动文本摘要技术研究综述[J]. 计算机技术与发展, 2023, 33(5): 1-8.
LIU D, XI X F, CUI Z M, et al. Review of research on extractive-abstractive automatic text summarization technology[J]. Computer Technology and Development, 2023, 33(5): 1-8.
[2] MOHAN M J, SUNITHA C, GANESH A, et al. A study on ontology based abstractive summarization[J]. Procedia Computer Science, 2016, 87: 32-37.
[3] SAKHARE D Y, KUMAR R, JANMEDA S. Development of embedded platform for Sanskrit grammar-based document summarization[M]//Speech and language processing for human-machine communications. Singapore: Springer, 2018: 41-50.
[4] ZHANG Y S, NI A S, YU T, et al. An exploratory study on long dialogue summarization: what works and what’s next[C]//Findings of the Association for Computational Linguistics. Stroudsburg: ACL, 2021: 4426-4433.
[5] 李健智, 王红玲, 王中卿. 基于场景与对话结构的摘要生成研究[J]. 计算机工程, 2023, 49(4): 303-311.
LI J Z, WANG H L, WANG Z Q. Research on summarization generation based on scene and dialogue structure[J]. Computer Engineering, 2023, 49(4): 303-311.
[6] GOO C W, CHEN Y N. Abstractive dialogue summarization with sentence-gated modeling optimized by dialogue acts[C]//Proceedings of the 2018 IEEE Spoken Language Technology Workshop. Piscataway: IEEE, 2018: 735-742.
[7] LIU C, WANG P, XU J, et al. Automatic dialogue summary generation for customer service[C]//Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 2019: 1957-1965.
[8] NARAYAN S, ZHAO Y, MAYNEZ J, et al. Planning with learned entity prompts for abstractive summarization[J]. Transactions of the Association for Computational Linguistics, 2021, 9: 1475-1492.
[9] LIU Z Y, NG A, LEE S, et al. Topic-aware pointer-generator networks for summarizing spoken conversations[C]//Proceedings of the 2019 IEEE Automatic Speech Recognition and Understanding Workshop. Piscataway: IEEE, 2019: 814-821.
[10] SHIN J, YU H, MOON H, et al. Dialogue summaries as dialogue states (DS2), template-guided summarization for few-shot dialogue state tracking[C]//Findings of the Association for Computational Linguistics. Stroudsburg: ACL, 2022: 3824-3846.
[11] FENG Xiachong, FENG Xiaocheng, QIN B. Incorporating commonsense knowledge into abstractive dialogue summarization via heterogeneous graph networks[C]//Proceedings of the 20th Chinese National Conference on Computational Linguistics, Huhhot, China, 2021: 964-975.
[12] FERNANDES P, ALLAMANIS M, BROCKSCHMIDT M. Structured neural summarization[J]. arXiv:1811.01824, 2018.
[13] GLIWA B, MOCHOL I, BIESEK M, et al. SAMSum Corpus: a human-annotated dialogue dataset for abstractive summarization[J]. arXiv:1911.12237, 2019.
[14] ZHAO L, XU W, GUO J. Improving abstractive dialogue summarization with graph structures and topic words[C]//Proceedings of the 28th International Conference on Computational Linguistics. Barcelona: International Committee on Computational Linguistics, 2020: 437-449.
[15] CHEN J, YANG D. Multi-view sequence-to-sequence models with conversational structure for abstractive dialogue summarization[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2020: 4106-4118.
[16] LECLAIR A, HAQUE S, WU L, et al. Improved code summarization via a graph neural network[C]//Proceedings of the 28th International Conference on Program Comprehension. New York: ACM, 2020: 184-195.
[17] WANG D, LIU P, ZHENG Y, et al. Heterogeneous graph neural networks for extractive document summarization[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 6209-6219.
[18] ZHAO L L, ZENG W H, XU W R, et al. Give the truth: incorporate semantic slot into abstractive dialogue summarization[C]//Findings of the Association for Computational Linguistics. Stroudsburg: ACL, 2021: 2435-2446.
[19] FENG Xiachong, FENG Xiaocheng, QIN B, et al. Dialogue discourse-aware graph model and data augmentation for meeting summarization[C]//Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021: 3808-3814.
[20] GAO S, CHENG X, LI M Z, et al. Dialogue summarization with static-dynamic structure fusion graph[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 13858-13873.
[21] 陈明轩, 肖诗斌, 王洪俊. 基于深度学习的生成式文本摘要综述[J]. 软件导刊, 2024, 23(5): 212-220.
CHEN M X, XIAO S B, WANG H J. A survey of deep learning-based generative text summarization[J]. Software Guide, 2024, 23(5): 212-220.
[22] LIU P F, YUAN W Z, FU J L, et al. Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing[J]. ACM Computing Surveys, 2023, 55(9): 1-35.
[23] DING N, HU S D, ZHAO W L, et al. OpenPrompt: an open-source framework for prompt-learning[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Stroudsburg: ACL, 2022: 105-113.
[24] DOU Z Y, LIU P F, HAYASHI H, et al. GSum: a general framework for guided neural abstractive summarization[C]//Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2021: 4830-4842.
[25] LI L, ZHANG Y F, CHEN L. Personalized prompt learning for explainable recommendation[J]. ACM Transactions on Information Systems, 2023, 41(4): 1-26.
[26] DING N, CHEN Y, HAN X, et al. Prompt-learning for fine-grained entity typing[C]//Findings of the Association for Computational Linguistics. Stroudsburg: ACL, 2022: 6888-6901.
[27] NAIR V, SCHUMACHER E, KANNAN A. Generating medically-accurate summaries of patient-provider dialogue: a multi-stage approach using large language models[C]//Proceedings of the 5th Clinical Natural Language Processing Workshop. Stroudsburg: ACL, 2023: 200-217.
[28] XIE K, YU T, WANG H, et al. Few-shot dialogue summarization via skeleton-assisted prompt transfer[J]. arXiv:2305.12077, 2023.
[29] ZHOU Y, RINGEVAL F, PORTET F. Can GPT models follow human summarization guidelines? Evaluating ChatGPT and GPT-4 for dialogue summarization[J]. arXiv:2310.16810, 2023.
[30] LEWIS M, LIU Y, GOYAL N, et al. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 7871-7880.
[31] MIHALCEA R, TARAU P. TextRank: bringing order into texts[C]//Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2004: 404-411.
[32] BLEI D M, NG A Y, JORDAN M I. Latent Dirichlet allocation[J]. Journal of Machine Learning Research, 2003, 3(1): 993-1022.
[33] ZHONG M, YIN D, YU T, et al. QMSum: a new benchmark for query-based multi-domain meeting summarization[C]//Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2021: 5905-5921.
[34] CHEN Y L, LIU Y, CHEN L, et al. DialogSum: a real-life scenario dialogue summarization dataset[C]//Findings of the Association for Computational Linguistics. Stroudsburg: ACL, 2021: 5062-5074.
[35] LIN C Y, HOVY E. Automatic evaluation of summaries using n-gram co-occurrence statistics[C]//Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, 2003: 150-157.
[36] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems, 2017.
[37] ZHU C, XU R, ZENG M, et al. A hierarchical network for abstractive meeting summarization with cross-domain pretraining[C]//Findings of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 194-203.
[38] CHEN J, YANG D. Structure-aware abstractive conversation summarization via discourse and action graphs[C]//Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2021: 1380-1391.
[39] LIU Z, CHEN N F. Controllable neural dialogue summarization with personal named entity planning[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2021: 92-106.
[40] YOO C, LEE H. Improving abstractive dialogue summarization using keyword extraction[J]. Applied Sciences, 2023, 13(17): 9771.
[41] LIU Z Y, WANG Z Y, WANG J H. A coarse-to-fine training paradigm for dialogue summarization[C]//Proceedings of the International Conference on Artificial Neural Networks and Machine Learning. Cham: Springer, 2022: 416-427.
[42] ZHANG Z, LI J H. Topic-features for dialogue summarization[C]//Proceedings of the International Conference on Natural Language Processing and Chinese Computing. Cham: Springer, 2022: 327-338.