
Computer Engineering and Applications ›› 2025, Vol. 61 ›› Issue (12): 129-140. DOI: 10.3778/j.issn.1002-8331.2403-0058
• Pattern Recognition and Artificial Intelligence •
MENG Xiangzhong, XIA Hongbin, LIU Yuan
Online: 2025-06-15
Published: 2025-06-13
MENG Xiangzhong, XIA Hongbin, LIU Yuan. Controllable Story Generation with Adaptive Knowledge Enhancement[J]. Computer Engineering and Applications, 2025, 61(12): 129-140.
URL: http://cea.ceaj.org/EN/10.3778/j.issn.1002-8331.2403-0058