[1] LI X J, WANG J, YU M. Research on automatic Chinese summarization combining pre-training and attention enhancement[J]. Computer Engineering and Applications, 2023, 59(14): 134-141.
[2] ZHONG M, LIU P F, CHEN Y R, et al. Extractive summarization as text matching[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 6197-6208.
[3] SRIKANTH A, UMASANKAR A S, THANU S, et al. Extractive text summarization using dynamic clustering and co-reference on BERT[C]//Proceedings of the 2020 5th International Conference on Computing, Communication and Security. Piscataway: IEEE, 2020: 1-5.
[4] MA T H, PAN Q, RONG H, et al. T-BERTSum: topic-aware text summarization based on BERT[J]. IEEE Transactions on Computational Social Systems, 2022, 9(3): 879-890.
[5] MA T H, PAN Q, WANG H M, et al. Graph classification algorithm based on graph structure embedding[J]. Expert Systems with Applications, 2020, 161: 113715.
[6] LI Z X, PENG Z, TANG S Q, et al. Fusing context information and key information for text summarization[J]. Journal of Chinese Information Processing, 2022, 36(1): 83-91.
[7] WANG S L, CHE W X, LIU Q, et al. Multi-task self-supervised learning for disfluency detection[C]//Proceedings of the 34th AAAI Conference on Artificial Intelligence, 2020: 9193-9200.
[8] RUSH A M, CHOPRA S, WESTON J. A neural attention model for abstractive sentence summarization[C]//Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2015: 379-389.
[9] NALLAPATI R, ZHOU B W, DOS SANTOS C, et al. Abstractive text summarization using sequence-to-sequence RNNs and beyond[C]//Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning. Stroudsburg: ACL, 2016: 280-290.
[10] ZHANG H Y, CAI J J, XU J J, et al. Pretraining-based natural language generation for text summarization[C]//Proceedings of the 23rd Conference on Computational Natural Language Learning. Stroudsburg: ACL, 2019: 789-797.
[11] WANG Q C, LIU P Y, ZHU Z F, et al. A text abstraction summary model based on BERT word embedding and reinforcement learning[J]. Applied Sciences, 2019, 9(21): 4701.
[12] CAO Z Q, WEI F R, LI W J, et al. Faithful to the original: fact aware neural abstractive summarization[C]//Proceedings of the 32nd AAAI Conference on Artificial Intelligence, 2018: 4784-4791.
[13] GOMEZ-PEREZ J M, PAN J Z, VETERE G, et al. Enterprise knowledge graph: an introduction[M]//Exploiting linked data and knowledge graphs in large organisations. Cham: Springer, 2017: 1-14.
[14] KONCEL-KEDZIORSKI R, BEKAL D, LUAN Y, et al. Text generation from knowledge graphs with graph transformers[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg: ACL, 2019: 2284-2293.
[15] ZHU C G, HINTHORN W, XU R C, et al. Enhancing factual consistency of abstractive summarization[C]//Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2021: 718-733.
[16] FERNANDES P, ALLAMANIS M, BROCKSCHMIDT M. Structured neural summarization[C]//Proceedings of the International Conference on Learning Representations, 2019.
[17] HUANG L Y, WU L F, WANG L. Knowledge graph-augmented abstractive summarization with semantic-driven cloze reward[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 5094-5107.
[18] LIU L Q, LU Y, YANG M, et al. Generative adversarial network for abstractive text summarization[C]//Proceedings of the 32nd AAAI Conference on Artificial Intelligence, 2018: 8109-8110.
[19] CHEN Y C, BANSAL M. Fast abstractive summarization with reinforce-selected sentence rewriting[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2018: 675-686.
[20] PAULUS R, XIONG C, SOCHER R. A deep reinforced model for abstractive summarization[C]//Proceedings of the International Conference on Learning Representations, 2018: 1-12.
[21] ANGELI G, PREMKUMAR M J J, MANNING C D. Leveraging linguistic structure for open domain information extraction[C]//Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2015: 344-354.
[22] MANNING C, SURDEANU M, BAUER J, et al. The Stanford CoreNLP natural language processing toolkit[C]//Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Stroudsburg: ACL, 2014: 55-60.
[23] VELICKOVIC P, CUCURULL G, CASANOVA A, et al. Graph attention networks[C]//Proceedings of the International Conference on Learning Representations, 2018.
[24] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017: 6000-6010.
[25] VINYALS O, FORTUNATO M, JAITLY N. Pointer networks[C]//Advances in Neural Information Processing Systems, 2015.
[26] HERMANN K M, KOCISKY T, GREFENSTETTE E, et al. Teaching machines to read and comprehend[C]//Advances in Neural Information Processing Systems, 2015.
[27] NARAYAN S, COHEN S B, LAPATA M. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2018: 1797-1807.
[28] LI W, XIAO X Y, LIU J C, et al. Leveraging graph to improve abstractive multi-document summarization[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 6232-6243.
[29] NALLAPATI R, ZHAI F F, ZHOU B W. SummaRuNNer: a recurrent neural network based sequence model for extractive summarization of documents[C]//Proceedings of the 31st AAAI Conference on Artificial Intelligence, 2017: 3075-3081.
[30] SANKARAN B, MI H T, AL-ONAIZAN Y, et al. Temporal attention model for neural machine translation[J]. arXiv: 1608.02927, 2016.
[31] GEHRMANN S, DENG Y T, RUSH A. Bottom-up abstractive summarization[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2018: 4098-4109.
[32] CELIKYILMAZ A, BOSSELUT A, HE X D, et al. Deep communicating agents for abstractive summarization[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1. Stroudsburg: ACL, 2018: 1662-1675.
[33] SONG K, TAN X, QIN T, et al. MASS: masked sequence to sequence pre-training for language generation[C]//Proceedings of the 36th International Conference on Machine Learning, 2019: 5926-5936.
[34] ZHANG J, ZHAO Y, SALEH M, et al. PEGASUS: pre-training with extracted gap-sentences for abstractive summarization[C]//Proceedings of the International Conference on Machine Learning, 2020: 11328-11339.
[35] DOU Z Y, LIU P F, HAYASHI H, et al. GSum: a general framework for guided neural abstractive summarization[C]//Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2021: 4830-4842.
[36] JI X, ZHAO W. SKGSUM: abstractive document summarization with semantic knowledge graphs[C]//Proceedings of the 2021 International Joint Conference on Neural Networks, 2021: 1-8.