[1] 杨生文. 基于深度学习的阅读理解题目生成研究[D]. 武汉: 华中师范大学, 2021.
YANG S W. Research on generation of reading comprehension subjects based on deep learning[D]. Wuhan: Central China Normal University, 2021.
[2] 帅鹏举. 基于序列到序列的问题及干扰项生成方法研究[D]. 重庆: 西南大学, 2022.
SHUAI P J. Research on sequence-to-sequence based question and distractor generation methods[D]. Chongqing: Southwest University, 2022.
[3] JIA X, ZHOU W, SUN X, et al. EQG-RACE: examination-type question generation[C]//Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI 2021), 2021: 13143-13151.
[4] PAN Y C, HU B T, WANG S Y, et al. Learning to generate complex question with intent prediction from long passage[J]. Applied Intelligence, 2023, 53(5): 5823-5833.
[5] XU J, SUN Y, GAN J H, et al. Leveraging structured information from a passage to generate questions[J]. Tsinghua Science and Technology, 2023, 28(3): 464-474.
[6] LAI G K, XIE Q Z, LIU H X, et al. RACE: large-scale reading comprehension dataset from examinations[C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2017: 785-794.
[7] ZYRIANOVA M, KALPAKCHI D, BOYE J. EMBRACE: evaluation and modifications for boosting RACE[J]. arXiv:2305.08433, 2023.
[8] RAJPURKAR P, ZHANG J, LOPYREV K, et al. SQuAD: 100,000+ questions for machine comprehension of text[C]//Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2016: 2383-2392.
[9] YANG Z L, QI P, ZHANG S Z, et al. HotpotQA: a dataset for diverse, explainable multi-hop question answering[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2018: 2369-2380.
[10] TRISCHLER A, WANG T, YUAN X D, et al. NewsQA: a machine comprehension dataset[C]//Proceedings of the 2nd Workshop on Representation Learning for NLP. Stroudsburg: ACL, 2017: 191-200.
[11] LIANG Y, LI J, YIN J. A new multi-choice reading comprehension dataset for curriculum learning[C]//Proceedings of the Asian Conference on Machine Learning, 2019: 742-757.
[12] YU W H, JIANG Z H, DONG Y F, et al. ReClor: a reading comprehension dataset requiring logical reasoning[J]. arXiv:2002.04326, 2020.
[13] WELBL J, LIU N F, GARDNER M. Crowdsourcing multiple choice science questions[C]//Proceedings of the 3rd Workshop on Noisy User-generated Text. Stroudsburg: ACL, 2017: 94-106.
[14] JAUHAR S K, TURNEY P, HOVY E. TabMCQ: a dataset of general knowledge tables and multiple-choice questions[J]. arXiv:1602.03960, 2016.
[15] HADIFAR A, BITEW S K, DELEU J, et al. EduQG: a multi-format multiple-choice dataset for the educational domain[J]. IEEE Access, 2023, 11: 20885-20896.
[16] DU X Y, CARDIE C. Identifying where to focus in reading comprehension for neural question generation[C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2017: 2067-2073.
[17] DUAN N, TANG D Y, CHEN P, et al. Question generation for question answering[C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2017: 866-874.
[18] SUN X W, LIU J, LYU Y J, et al. Answer-focused and position-aware neural question generation[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2018: 3930-3939.
[19] GAO Y F, BING L D, LI P J, et al. Generating distractors for reading comprehension questions from real examinations[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2019: 6423-6430.
[20] SONG L F, WANG Z G, HAMZA W, et al. Leveraging context information for natural question generation[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2. Stroudsburg: ACL, 2018: 569-574.
[21] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.
[22] DEVLIN J, CHANG M, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg: ACL, 2019: 4171-4186.
[23] KIPF T N, WELLING M. Semi-supervised classification with graph convolutional networks[C]//Proceedings of the International Conference on Learning Representations, 2017.
[24] VELIČKOVIĆ P, CUCURULL G, CASANOVA A, et al. Graph attention networks[C]//Proceedings of the International Conference on Learning Representations, 2018.
[25] SHUAI P J, LI L, LIU S S, et al. QDG: a unified model for automatic question-distractor pairs generation[J]. Applied Intelligence, 2023, 53(7): 8275-8285.
[26] SACHAN M, XING E. Self-training for jointly learning to ask and answer questions[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1. Stroudsburg: ACL, 2018: 629-640.
[27] DU X Y, CARDIE C. Harvesting paragraph-level question-answer pairs from Wikipedia[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2018: 1907-1917.
[28] WILLIS A, DAVIS G, RUAN S, et al. Key phrase extraction for generating educational question-answer pairs[C]//Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale. New York: ACM, 2019: 1-10.
[29] QU F Y, JIA X, WU Y F. Asking questions like educational experts: automatically generating question-answer pairs on real-world examination data[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2021: 2583-2593.
[30] QI W Z, YAN Y, GONG Y Y, et al. ProphetNet: predicting future N-gram for sequence-to-sequence pre-training[C]//Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg: ACL, 2020: 2401-2410.
[31] GONZALEZ H, DUGAN L, MILTSAKAKI E, et al. Enhancing human summaries for question-answer generation in education[C]//Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications. Stroudsburg: ACL, 2023: 108-118.
[32] RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. Journal of Machine Learning Research, 2020, 21(1): 5485-5551.
[33] BROWN T, MANN B, RYDER N, et al. Language models are few-shot learners[C]//Advances in Neural Information Processing Systems, 2020: 1877-1901.
[34] WANG Y C, LI L. Generating question-answer pairs for few-shot learning[C]//Proceedings of the International Conference on Artificial Neural Networks, 2023: 414-425.
[35] ACHIAM J, ADLER S, AGARWAL S, et al. GPT-4 technical report[J]. arXiv:2303.08774, 2023.
[36] REIMERS N, GUREVYCH I. Sentence-BERT: sentence embeddings using Siamese BERT-networks[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2019: 3980-3990.
[37] CHEN D Q, BOLTON J, MANNING C D. A thorough examination of the CNN/Daily Mail reading comprehension task[C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2016: 2358-2367.
[38] BIRD S, KLEIN E, LOPER E. Natural language processing with Python: analyzing text with the natural language toolkit[M]. [S.l.]: O’Reilly Media, Inc., 2009.
[39] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems, 2017: 5998-6008.
[40] ZHANG T, KISHORE V, WU F, et al. BERTScore: evaluating text generation with BERT[C]//Proceedings of the International Conference on Learning Representations, 2020.
[41] HOSKING T, RIEDEL S. Evaluating rewards for question generation models[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1. Stroudsburg: ACL, 2019: 2278-2283.
[42] SEE A, LIU P J, MANNING C D. Get to the point: summarization with pointer-generator networks[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2017: 1073-1083.
[43] ZHANG S Y, BANSAL M. Addressing semantic drift in question generation for semi-supervised question answering[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2019: 2495-2509.
[44] LEWIS M, LIU Y H, GOYAL N, et al. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 7871-7880.
[45] CHUNG H W, HOU L, LONGPRE S, et al. Scaling instruction-finetuned language models[J]. arXiv:2210.11416, 2022.