[1] OTTER D W, MEDINA J R, KALITA J K. A survey of the usages of deep learning for natural language processing[J]. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(2): 604-624.
[2] 桂韬, 奚志恒, 郑锐, 等. 基于深度学习的自然语言处理鲁棒性研究综述[J]. 计算机学报, 2024, 47(1): 90-112.
GUI T, XI Z H, ZHENG R, et al. Recent researches of robustness in natural language processing based on deep neural network[J]. Chinese Journal of Computers, 2024, 47(1): 90-112.
[3] RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners[J]. OpenAI Blog, 2019, 1(8): 9-10.
[4] LEWIS M, LIU Y H, GOYAL N, et al. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 7871-7880.
[5] JIN H Q, CAO Y, WANG T M, et al. Recent advances of neural text generation: core tasks, datasets, models and challenges[J]. Science China Technological Sciences, 2020, 63(10): 1990-2010.
[6] TANG C, LIN C H, HUANG H L, et al. EtriCA: event-triggered context-aware story generation augmented by cross attention[C]//Findings of the Association for Computational Linguistics: EMNLP 2022. Stroudsburg: ACL, 2022: 5504-5518.
[7] TANG C, LOAKMAN T, LIN C H. A cross-attention augmented model for event-triggered context-aware story generation[J]. Computer Speech & Language, 2024, 88: 101662.
[8] ZHU Y K, KIROS R, ZEMEL R, et al. Aligning books and movies: towards story-like visual explanations by watching movies and reading books[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2015: 19-27.
[9] HUANG L F, HUANG L E. Optimized event storyline generation based on mixture-event-aspect model[C]//Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2013: 726-735.
[10] ROEMMELE M. Writing stories with help from recurrent neural networks[C]//Proceedings of the 30th AAAI Conference on Artificial Intelligence, 2016.
[11] FAN A, LEWIS M, DAUPHIN Y. Hierarchical neural story generation[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2018: 889-898.
[12] YAO L L, PENG N Y, WEISCHEDEL R, et al. Plan-and-write: towards better automatic storytelling[C]//Proceedings of the 33rd AAAI Conference on Artificial Intelligence, 2019: 7378-7385.
[13] FAN A, LEWIS M, DAUPHIN Y. Strategies for structuring story generation[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2019: 2650-2660.
[14] GOLDFARB-TARRANT S, CHAKRABARTY T, WEISCHEDEL R, et al. Content planning for neural story generation with Aristotelian rescoring[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2020: 4319-4338.
[15] GUAN J, MAO X X, FAN C J, et al. Long text generation by modeling sentence-level and discourse-level coherence[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2021: 6379-6393.
[16] HONG X D, DEMBERG V, SAYEED A, et al. Visual coherence loss for coherent and visually grounded story generation[C]//Findings of the Association for Computational Linguistics: ACL 2023. Stroudsburg: ACL, 2023: 9456-9470.
[17] CHEN Y T, LI R H, SHI B W, et al. Visual story generation based on emotion and keywords[J]. arXiv:2301.02777, 2023.
[18] BRAHMAN F, CHATURVEDI S. Modeling protagonist emotions for emotion-aware storytelling[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2020: 5277-5294.
[19] KONG X Z, HUANG J L, TUNG Z, et al. Stylized story generation with style-guided planning[C]//Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Stroudsburg: ACL, 2021: 2430-2436.
[20] XIE Y Q, HU Y, LI Y P, et al. Psychology-guided controllable story generation[C]//Proceedings of the 29th International Conference on Computational Linguistics. Stroudsburg: ACL, 2022: 6480-6492.
[21] WANG X P, JIANG H, WEI Z H, et al. CHAE: fine-grained controllable story generation with characters, actions and emotions[C]//Proceedings of the 29th International Conference on Computational Linguistics. Stroudsburg: ACL, 2022: 6426-6435.
[22] SASAZAWA Y, MORISHITA T, OZAKI H, et al. Controlling keywords and their positions in text generation[C]//Proceedings of the 16th International Natural Language Generation Conference. Stroudsburg: ACL, 2023.
[23] GUAN J, HUANG F, ZHAO Z H, et al. A knowledge-enhanced pretraining model for commonsense story generation[J]. Transactions of the Association for Computational Linguistics, 2020, 8: 93-108.
[24] XU P, PATWARY M, SHOEYBI M, et al. MEGATRON-CNTRL: controllable story generation with external knowledge using large-scale language models[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2020: 2831-2845.
[25] ZHANG Z X, WEN J X, GUAN J, et al. Persona-guided planning for controlling the protagonist’s persona in story generation[C]//Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2022: 3346-3361.
[26] SAP M, LE BRAS R, ALLAWAY E, et al. ATOMIC: an atlas of machine commonsense for if-then reasoning[C]//Proceedings of the 33rd AAAI Conference on Artificial Intelligence, 2019: 3027-3035.
[27] BOSSELUT A, RASHKIN H, SAP M, et al. COMET: commonsense transformers for automatic knowledge graph construction[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2019: 4762-4779.
[28] LEE H, HUDSON D A, LEE K, et al. SLM: learning a discourse language representation with sentence unshuffling[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2020: 1551-1562.
[29] MOSTAFAZADEH N, CHAMBERS N, HE X D, et al. A corpus and cloze evaluation for deeper understanding of commonsense stories[C]//Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2016: 839-849.
[30] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2019: 4171-4186.
[31] RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. Journal of Machine Learning Research, 2020, 21(1): 5485-5551.
[32] HOLTZMAN A, BUYS J, DU L, et al. The curious case of neural text degeneration[J]. arXiv:1904.09751, 2019.
[33] CHHUN C, COLOMBO P, CLAVEL C, et al. Of human criteria and automatic metrics: a benchmark of the evaluation of story generation[C]//Proceedings of the 29th International Conference on Computational Linguistics. Stroudsburg: ACL, 2022: 5794-5836.
[34] ZHANG T Y, KISHORE V, WU F, et al. BERTScore: evaluating text generation with BERT[J]. arXiv:1904.09675, 2019.
[35] LI J W, GALLEY M, BROCKETT C, et al. A diversity-promoting objective function for neural conversation models[C]//Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2016: 110-119.
[36] XU X N, DUŠEK O, KONSTAS I, et al. Better conversations by modeling, filtering, and optimizing for coherence and diversity[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2018: 3981-3991.
[37] CHEN H, VO D M, TAKAMURA H, et al. StoryER: automatic story evaluation via ranking, rating and reasoning[J]. Journal of Natural Language Processing, 2023, 30(1): 243-249.
[38] PENNINGTON J, SOCHER R, MANNING C. GloVe: global vectors for word representation[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2014: 1532-1543.