[1] 李金鹏, 张闯, 陈小军, 等. 自动文本摘要研究综述[J]. 计算机研究与发展, 2021, 58(1): 1-21.
LI J P, ZHANG C, CHEN X J, et al. Survey on automatic text summarization[J]. Journal of Computer Research and Development, 2021, 58(1): 1-21.
[2] EL-KASSAS W S, SALAMA C R, RAFEA A, et al. Automatic text summarization: a comprehensive survey[J]. Expert Systems with Applications, 2021, 165: 113679-113725.
[3] 徐月梅, 胡玲, 赵佳艺, 等. 大语言模型的技术应用前景与风险挑战[J]. 计算机应用, 2024, 44(6): 1655-1662.
XU Y M, HU L, ZHAO J Y, et al. Technology application prospects and risk challenges of large language model[J]. Journal of Computer Applications, 2024, 44(6): 1655-1662.
[4] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems 30, 2017: 5998-6008.
[5] LIU P, YUAN W, FU J, et al. Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing[J]. ACM Computing Surveys, 2023, 55(9): 1-35.
[6] PFEIFFER J, KAMATH A, RÜCKLE A, et al. AdapterFusion: non-destructive task composition for transfer learning[J]. arXiv:2005.00247, 2020.
[7] PRUKSACHATKUN Y, PHANG J, LIU H, et al. Intermediate-task transfer learning with pretrained language models: when and why does it work?[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020: 5231-5247.
[8] MIRZADEH S I, CHAUDHRY A, YIN D, et al. Wide neural networks forget less catastrophically[C]//Proceedings of the International Conference on Machine Learning, 2022: 15699-15717.
[9] 李凡长, 刘洋, 吴鹏翔, 等. 元学习研究综述[J]. 计算机学报, 2021, 44(2): 422-446.
LI F C, LIU Y, WU P X, et al. A survey on recent advances in meta-learning[J]. Chinese Journal of Computers, 2021, 44(2): 422-446.
[10] 李庚松, 刘艺, 秦伟, 等. 面向算法选择的元学习研究综述[J]. 计算机科学与探索, 2023, 17(1): 88-107.
LI G S, LIU Y, QIN W, et al. Survey on meta-learning research of algorithm selection[J]. Journal of Frontiers of Computer Science & Technology, 2023, 17(1): 88-107.
[11] WANG J, LAN C, LIU C, et al. Generalizing to unseen domains: a survey on domain generalization[J]. IEEE Transactions on Knowledge and Data Engineering, 2022, 35(8): 8052-8072.
[12] RUSH A M, CHOPRA S, WESTON J. A neural attention model for abstractive sentence summarization[C]//Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015: 379-389.
[13] LIU Y, LAPATA M. Text summarization with pretrained encoders[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019: 3730-3740.
[14] ZHANG J, ZHAO Y, SALEH M, et al. PEGASUS: pre-training with extracted gap-sentences for abstractive summarization[C]//Proceedings of the 37th International Conference on Machine Learning, 2020: 11328-11339.
[15] RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. The Journal of Machine Learning Research, 2020, 21(1): 5485-5551.
[16] HOULSBY N, GIURGIU A, JASTRZEBSKI S, et al. Parameter-efficient transfer learning for NLP[C]//Proceedings of the International Conference on Machine Learning, 2019: 2790-2799.
[17] 赵凯琳, 靳小龙, 王元卓. 小样本学习研究综述[J]. 软件学报, 2021, 32(2): 349-369.
ZHAO K L, JIN X L, WANG Y Z. Survey on few-shot learning[J]. Journal of Software, 2021, 32(2): 349-369.
[18] RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners[J]. OpenAI Blog, 2019, 1(8): 9.
[19] FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[C]//Proceedings of the 34th International Conference on Machine Learning, 2017: 1126-1135.
[20] NICHOL A, ACHIAM J, SCHULMAN J. On first-order meta-learning algorithms[J]. arXiv:1803.02999, 2018.
[21] QIAN K, YU Z. Domain adaptive dialog generation via meta learning[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019: 2639-2649.
[22] LEE H Y, LI S W, VU T. Meta learning for natural language processing: a survey[C]//Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022: 666-684.
[23] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019: 4171-4186.
[24] GUAN W, SMETANNIKOV I, TIANXING M. Survey on automatic text summarization and transformer models applicability[C]//Proceedings of the 2020 1st International Conference on Control, Robotics and Intelligent System, 2020: 176-184.
[25] NI J, LI J, MCAULEY J. Justifying recommendations using distantly-labeled reviews and fine-grained aspects[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019: 188-197.
[26] AO X, WANG X, LUO L, et al. PENS: a dataset and generic framework for personalized news headline generation[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021: 82-92.
[27] LIN C Y. ROUGE: a package for automatic evaluation of summaries[C]//Text Summarization Branches Out, 2004: 74-81.
[28] PAPINENI K, ROUKOS S, WARD T, et al. BLEU: a method for automatic evaluation of machine translation[C]//Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 2002: 311-318.
[29] ZANDIE R, MAHOOR M H. Topical language generation using transformers[J]. Natural Language Engineering, 2023, 29(2): 337-359.
[30] ZHAO T, LI G, SONG Y, et al. A multi-scenario text generation method based on meta reinforcement learning[J]. Pattern Recognition Letters, 2023, 165: 47-54.
[31] CHEN Y S, SONG Y Z, SHUAI H H. SPEC: summary preference decomposition for low-resource abstractive summarization[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022, 31(6): 603-618.