[1] XU X, GOU Z, WU W, et al. Long time no see! Open-domain conversation with long-term persona memory[C]//Findings of the Association for Computational Linguistics: ACL 2022, 2022: 2639-2650.
[2] SONG H, WANG Y, ZHANG W, et al. Generate, delete and rewrite: a three-stage framework for improving persona consistency of dialogue generation[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020: 5821-5831.
[3] NI J, YOUNG T, PANDELEA V, et al. Recent advances in deep learning based dialogue systems: a systematic survey[J]. Artificial Intelligence Review, 2023, 56(4): 3055-3155.
[4] ZHANG S, DINAN E, URBANEK J, et al. Personalizing dialogue agents: I have a dog, do you have pets too?[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018: 2204-2213.
[5] WELLECK S, WESTON J, SZLAM A, et al. Dialogue natural language inference[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019: 3731-3741.
[6] LIU Q, CHEN Y, CHEN B, et al. You impress me: dialogue generation via mutual persona perception[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020: 1417-1427.
[7] SONG H, WANG Y, ZHANG K, et al. BoB: BERT over BERT for training persona-based dialogue models from limited personalized data[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021: 167-177.
[8] CAO Y, BI W, FANG M, et al. A model-agnostic data manipulation method for persona-based dialogue generation[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022: 7984-8002.
[9] STENT A, MARGE M, SINGHAI M. Evaluating evaluation methods for generation in the presence of variation[C]//International Conference on Intelligent Text Processing and Computational Linguistics, 2005: 341-351.
[10] CAMPOS J, KENNEDY J, LEHMAN J F. Challenges in exploiting conversational memory in human-agent interaction[C]//Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems, 2018: 1649-1657.
[11] LI C, GAO X, LI Y, et al. Optimus: organizing sentences via pre-trained modeling of a latent space[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 4678-4699.
[12] BAO S, HE H, WANG F, et al. PLATO: pre-trained dialogue generation model with discrete latent variable[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020: 85-96.
[13] CHEN W, GONG Y, WANG S, et al. DialogVED: a pre-trained latent variable encoder-decoder model for dialog response generation[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022: 4852-4864.
[14] XIE Q, DAI Z, HOVY E, et al. Unsupervised data augmentation for consistency training[C]//Advances in Neural Information Processing Systems, 2020: 6256-6268.
[15] SAP M, LE BRAS R, ALLAWAY E, et al. ATOMIC: an atlas of machine commonsense for if-then reasoning[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2019: 3027-3035.
[16] BOSSELUT A, RASHKIN H, SAP M, et al. COMET: commonsense transformers for automatic knowledge graph construction[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019: 4762-4779.
[17] ZHAO T, ZHAO R, ESKENAZI M. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2017: 654-664.
[18] WOLF T, SANH V, CHAUMOND J, et al. TransferTransfo: a transfer learning approach for neural network based conversational agents[J]. arXiv:1901.08149, 2019.
[19] LEWIS M, LIU Y, GOYAL N, et al. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020: 7871-7880.
[20] LOSHCHILOV I, HUTTER F. Decoupled weight decay regularization[J]. arXiv:1711.05101, 2017.
[21] SONG H, ZHANG W N, CUI Y, et al. Exploiting persona information for diverse generation of conversational responses[J]. arXiv:1905.12188, 2019.
[22] LIAN R, XIE M, WANG F, et al. Learning to select knowledge for response generation in dialog systems[C]//Proceedings of the 28th International Joint Conference on Artificial Intelligence, 2019: 5081-5087.
[23] BAHL L R, JELINEK F, MERCER R L. A maximum likelihood approach to continuous speech recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1983, 5(2): 179-190.
[24] PAPINENI K, ROUKOS S, WARD T, et al. Bleu: a method for automatic evaluation of machine translation[C]//Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 2002: 311-318.
[25] LI J, GALLEY M, BROCKETT C, et al. A diversity-promoting objective function for neural conversation models[C]//Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016: 110-119.
[26] MADOTTO A, LIN Z, WU C S, et al. Personalizing dialogue agents via meta-learning[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019: 5454-5459.
[27] ZUEHLKE D, GEWENIGER T, HEIMANN U, et al. Fuzzy Fleiss-kappa for comparison of fuzzy classifiers[C]//Proceedings of the 17th European Symposium on Artificial Neural Networks, 2009.