[1] SEE A, LIU P J, MANNING C D. Get to the point: summarization with pointer-generator networks[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 2017: 1073-1083.
[2] LIU Z, NG A, LEE S, et al. Topic-aware pointer-generator networks for summarizing spoken conversations[C]//2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2019: 814-821.
[3] KRISHNA K, KHOSLA S, BIGHAM J, et al. Generating SOAP notes from doctor-patient conversations using modular summarization techniques[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021: 4958-4972.
[4] VINYALS O, FORTUNATO M, JAITLY N. Pointer networks[C]//Proceedings of the 28th International Conference on Neural Information Processing Systems, 2015: 2692-2700.
[5] LIU Y, ZHANG G, YU P, et al. BioCopy: a plug-and-play span copy mechanism in Seq2Seq models[C]//Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing, 2021: 53-57.
[6] MIHALCEA R, TARAU P. TextRank: bringing order into texts[C]//Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, 2004: 404-411.
[7] CAO Z, WEI F, LI D, et al. Ranking with recursive neural networks and its application to multi-document summarization[C]//Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015: 2153-2159.
[8] LIU Y. Fine-tune BERT for extractive summarization[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019: 3548-3553.
[9] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017: 6000-6010.
[10] MANAKUL P, GALES M. Long-span dependencies in transformer-based summarization systems[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021: 6026-6041.
[11] DONG L, YANG N, WANG W, et al. Unified language model pre-training for natural language understanding and generation[C]//Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019: 13042-13054.
[12] GULCEHRE C, AHN S, NALLAPATI R, et al. Pointing the unknown words[C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016: 140-149.
[13] GU J, LU Z, LI H, et al. Incorporating copying mechanism in sequence-to-sequence learning[C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016: 1631-1640.
[14] MADOTTO A, WU C S, FUNG P. Mem2Seq: effectively incorporating knowledge bases into end-to-end task-oriented dialog systems[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, 2018: 1468-1478.
[15] LIU Y, LAPATA M. Text summarization with pretrained encoders[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019: 3730-3740.
[16] JOSHI A, KATARIYA N, AMATRIAIN X, et al. Dr. summarize: global summarization of medical dialogue by exploiting local structures[C]//Findings of the Association for Computational Linguistics (EMNLP 2020), 2020: 3755-3763.
[17] SONG Y, TIAN Y, WANG N, et al. Summarizing medical conversations via identifying important utterances[C]//Proceedings of the 28th International Conference on Computational Linguistics, 2020: 717-729.
[18] 谢诗文. 基于TextRank算法的在线问诊多轮对话文本摘要研究[D]. 武汉: 武汉大学, 2020.
XIE S W. Research on multi-round dialogue text summarization in online medical inquiry based on the TextRank algorithm[D]. Wuhan: Wuhan University, 2020.
[19] ZHANG L, NEGRINHO R, GHOSH A, et al. Leveraging pretrained models for automatic summarization of doctor-patient conversations[C]//Findings of the Association for Computational Linguistics (EMNLP 2021), 2021: 3693-3712.
[20] 何玉洁. 基于命名实体识别的医学病历自动生成研究与实现[D]. 银川: 宁夏大学, 2020.
HE Y J. Research and realization of automatic medical record generation based on named entity recognition[D]. Yinchuan: Ningxia University, 2020.
[21] NALLAPATI R, ZHOU B, SANTOS C, et al. Abstractive text summarization using sequence-to-sequence RNNs and beyond[C]//Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, 2016: 280-290.
[22] GUO M, AINSLIE J, UTHUS D, et al. LongT5: efficient text-to-text transformer for long sequences[C]//Findings of the Association for Computational Linguistics (NAACL 2022), 2022: 724-736.