[1] SUGAWARA S, STENETORP P, INUI K, et al. Assessing the benchmarking capacity of machine reading comprehension datasets[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2020: 8918-8927.
[2] LIU B, WEI H, NIU D, et al. Asking questions the human way: scalable question-answer generation from text corpus[C]//Proceedings of the Web Conference, 2020: 2032-2043.
[3] LI F Y, ZHAO Y H, YANG F F, et al. Incorporating translation quality estimation into Chinese-Korean neural machine translation[C]//Proceedings of the China National Conference on Chinese Computational Linguistics, 2021: 906-915.
[4] 孟金旭, 单鸿涛, 万俊杰, 等. BSLA:改进Siamese-LSTM的文本相似模型[J]. 计算机工程与应用, 2022, 58(23): 178-185.
MENG J X, SHAN H T, WAN J J, et al. BSLA: improved Siamese-LSTM-based text similarity model[J]. Computer Engineering and Applications, 2022, 58(23): 178-185.
[5] ROGERS A, GARDNER M, AUGENSTEIN I. QA dataset explosion: a taxonomy of NLP resources for question answering and reading comprehension[J]. ACM Computing Surveys, 2023, 55(10): 1-45.
[6] JIANG K, ZHAO Y, JIN G, et al. KETM: a knowledge-enhanced text matching method[C]//Proceedings of the 2023 International Joint Conference on Neural Networks, 2023: 1-8.
[7] PAN M, PEI Q, LIU Y, et al. SPRF: a semantic pseudo-relevance feedback enhancement for information retrieval via ConceptNet[J]. Knowledge-Based Systems, 2023, 274: 110602.
[8] WANG S, JIANG J. Learning natural language inference with LSTM[C]//Proceedings of the Conference on North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016: 1442-1451.
[9] CONNEAU A, KIELA D, SCHWENK H, et al. Supervised learning of universal sentence representations from natural language inference data[C]//Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2017: 670-680.
[10] REIMERS N, GUREVYCH I. Sentence-BERT: sentence embeddings using siamese BERT-Networks[C]//Proceedings of the Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019: 3982-3992.
[11] GAO T, YAO X, CHEN D. SimCSE: simple contrastive learning of sentence embeddings[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021: 6894-6910.
[12] YANG R, ZHANG J, GAO X, et al. Simple and effective text matching with richer alignment features[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019: 4699-4709.
[13] DENG Y, LI X, ZHANG M, et al. Enhanced distance-aware self-attention and multi-level match for sentence semantic matching[J]. Neurocomputing, 2022, 501: 174-187.
[14] LU X, DENG Y, SUN T, et al. MKPM: multi keyword-pair matching for natural language sentences[J]. Applied Intelligence, 2022, 52(2): 1878-1892.
[15] TANG X, LUO Y, XIONG D, et al. Short text matching model with multiway semantic interaction based on multi-granularity semantic embedding[J]. Applied Intelligence, 2022, 52(13): 15632-15642.
[16] SHEN D, WANG G, WANG W, et al. Baseline needs more love: on simple word-embedding-based models and associated pooling mechanisms[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, 2018: 440-450.
[17] TALMAN A, YLI-JYRA A, TIEDEMANN J. Natural language inference with hierarchical BiLSTM max pooling architecture[J]. arXiv:1808.08762, 2018.
[18] YU C, XUE H, JIANG Y, et al. A simple and efficient text matching model based on deep interaction[J]. Information Processing & Management, 2021, 58(6): 102738.
[19] 姜克鑫, 赵亚慧, 崔荣一. 融合高低层语义信息的自然语言句子匹配方法[J]. 计算机应用研究, 2022, 39(4): 1060-1063.
JIANG K X, ZHAO Y H, CUI R Y. Natural language sentence matching method fusing high-level and low-level semantic information[J]. Application Research of Computers, 2022, 39(4): 1060-1063.
[20] ZHANG K, LYU G, WANG L, et al. DRr-Net: dynamic re-read network for sentence semantic matching[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2019: 7442-7449.
[21] 陈岳林, 田文靖, 蔡晓东, 等. 基于密集连接网络和多维特征融合的文本匹配模型[J]. 浙江大学学报 (工学版), 2021, 55(12): 2352-2358.
CHEN Y L, TIAN W J, CAI X D, et al. Text matching model based on dense connection network and multi-dimensional feature fusion[J]. Journal of Zhejiang University (Engineering Science), 2021, 55(12): 2352-2358.
[22] 胡怡然, 夏芳. 基于自注意力机制与BiLSTM的短文本匹配模型[J]. 武汉科技大学学报, 2023, 46(1): 75-80.
HU Y R, XIA F. Short text matching model based on self-attention mechanism and BiLSTM[J]. Journal of Wuhan University of Science and Technology, 2023, 46(1): 75-80.
[23] ZHANG R, ZHOU Q, WU B, et al. What do questions exactly ask? MFAE: duplicate question identification with multi-fusion asking emphasis[C]//Proceedings of the SIAM International Conference on Data Mining, 2020: 226-234.
[24] LEE K, CHOI G, CHOI C. Use all tokens method to improve semantic relationship learning[J]. Expert Systems with Applications, 2023, 233: 120911.
[25] HU X, LIN L, LIU A, et al. A multi-level supervised contrastive learning framework for low-resource natural language inference[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023, 31: 1771-1783.
[26] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv:1810.04805, 2018.
[27] CHEN Q, ZHU X, LING Z H, et al. Enhanced LSTM for natural language inference[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 2017: 1657-1668.