[1] PI Z, XI X F, CUI Z M, et al. A data augmentation method for long text automatic summarization[J]. Journal of Chinese Information Processing, 2022, 36(9): 46-56.
[2] HUANG Y X, YU Z T, GUO J J, et al. Case-related topic summarization based on topic interaction graph[J]. Journal of Software, 2023, 34(4): 1796-1810.
[3] LI G, YU Z T, HUANG Y X. Extractive summary for public opinion news via case elements heterogeneous graph[J]. Computer Engineering and Applications, 2023, 59(4): 112-119.
[4] YU S, SONG Y M, QIN Y B, et al. Method for generating summary of judgment documents based on trial logic steps[J]. Computer Engineering and Applications, 2024, 60(4): 113-121.
[5] SUTSKEVER I, VINYALS O, LE Q V. Sequence to sequence learning with neural networks[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems, 2014: 3104-3112.
[6] REIMERS N, GUREVYCH I. Sentence-BERT: sentence embeddings using siamese BERT-networks[J]. arXiv:1908.10084, 2019.
[7] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv:1810.04805, 2018.
[8] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017: 6000-6010.
[9] CER D, DIAB M, AGIRRE E, et al. SemEval-2017 task 1: semantic textual similarity-multilingual and cross-lingual focused evaluation[J]. arXiv:1708.00055, 2017.
[10] LIU Y, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[J]. arXiv:1907.11692, 2019.
[11] KIROS R, ZHU Y, SALAKHUTDINOV R R, et al. Skip-thought vectors[J]. arXiv:1506.06726, 2015.
[12] WILLIAMS A, NANGIA N, BOWMAN S R. A broad-coverage challenge corpus for sentence understanding through inference[J]. arXiv:1704.05426, 2017.
[13] HILL F, CHO K, KORHONEN A. Learning distributed representations of sentences from unlabelled data[J]. arXiv:1602.03483, 2016.
[14] HUMEAU S, SHUSTER K, LACHAUX M A, et al. Poly-encoders: transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring[J]. arXiv:1905.01969, 2019.
[15] MCKEOWN K, RADEV D R. Generating summaries of multiple news articles[C]//Proceedings of the 18th Annual International Conference on Research and Development in Information Retrieval, 1995: 74-82.
[16] CHOPRA S, AULI M, RUSH A M. Abstractive sentence summarization with attentive recurrent neural networks[C]//Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016: 93-98.
[17] LOPYREV K. Generating news headlines with recurrent neural networks[J]. arXiv:1512.01712, 2015.
[18] BAHDANAU D, CHO K, BENGIO Y. Neural machine translation by jointly learning to align and translate[J]. arXiv:1409.0473, 2014.
[19] LEWIS M, LIU Y, GOYAL N, et al. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension[J]. arXiv:1910.13461, 2019.
[20] RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. The Journal of Machine Learning Research, 2020, 21(1): 5485-5551.
[21] LI W, GAO C, NIU G, et al. UNIMO: towards unified-modal understanding and generation via cross-modal contrastive learning[J]. arXiv:2012.15409, 2020.
[22] BAKR E M, SUN P, SHEN X, et al. HRS-Bench: holistic, reliable and scalable benchmark for text-to-image models[J]. arXiv:2304.05390, 2023.
[23] WANG Z H, LI B A, LV X Q, et al. BETES: a method of extractive summarization for Chinese long documents[J]. Journal of Chinese Computer Systems, 2022, 43(1): 42-49.
[24] ZHANG L, LENG J D, LV X Q, et al. RLCPAR: a rewriting model for Chinese patent abstracts based on reinforcement learning[J]. Data Analysis and Knowledge Discovery, 2021, 5(7): 59-69.
[25] HU B, CHEN Q, ZHU F. LCSTS: a large scale Chinese short text summarization dataset[J]. arXiv:1506.05865, 2015.
[26] GAO S, CHEN X, LI P, et al. Abstractive text summarization by incorporating reader comments[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2019: 6399-6406.
[27] WANG D, CHEN J, WU X, et al. CNewSum: a large-scale Chinese news summarization dataset with human-annotated adequacy and deducibility level[J]. arXiv:2110.10874, 2021.
[28] LIN C Y. ROUGE: a package for automatic evaluation of summaries[C]//Proceedings of the Workshop on Text Summarization Branches Out, 2004: 74-81.
[29] LIU Y, LIU P, RADEV D, et al. BRIO: bringing order to abstractive summarization[J]. arXiv:2203.16804, 2022.
[30] ZHANG L, DU Y F, LV X Q, et al. STNLTP: generating Chinese patent abstracts based on integrated strategy[J]. Data Analysis and Knowledge Discovery, 2022, 6(7): 107-117.