
Computer Engineering and Applications ›› 2025, Vol. 61 ›› Issue (20): 19-35. DOI: 10.3778/j.issn.1002-8331.2501-0061
WU Xuan, FU Tao
Online: 2025-10-15
Published: 2025-10-15
Abstract: Large language models (LLMs) have demonstrated strong capabilities in natural language processing, but they still face problems such as hallucination and a lack of domain-specific knowledge. Retrieval-augmented generation (RAG) leverages large-scale external knowledge bases to strengthen a model's semantic understanding and generation ability, effectively mitigating some of these problems and providing practical solutions for NLP tasks such as open-domain question answering, text summarization, and dialogue systems. This paper comprehensively surveys key technical advances in RAG, covering the retriever, the generator, and the optimization opportunities for each component; it then summarizes existing RAG evaluation methods and discusses the limitations of current RAG evaluation. Finally, it discusses possible future research directions for RAG.
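The retriever-then-generator pipeline the abstract describes can be sketched minimally as follows. This is a toy illustration, not any system from the survey: the three-passage corpus, the whitespace tokenizer, and the IDF-weighted overlap scoring are all simplifying assumptions, and `build_prompt` merely assembles the augmented context that would, in practice, be passed to an LLM.

```python
import math
from collections import Counter

# Toy knowledge base standing in for the large external corpus the abstract mentions.
CORPUS = [
    "Retrieval-augmented generation combines a retriever with a generator.",
    "Hallucination means a model states facts not supported by evidence.",
    "Dense retrievers embed queries and passages into a shared vector space.",
]

def tokenize(text):
    # Crude normalization: lowercase and strip trailing punctuation.
    return [t.strip(".,?!").lower() for t in text.split()]

def retrieve(query, corpus, k=1):
    """Rank passages by IDF-weighted term overlap with the query (a BM25-like toy)."""
    docs = [tokenize(d) for d in corpus]
    n = len(corpus)
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    q_terms = set(tokenize(query))
    scored = []
    for i, d in enumerate(docs):
        # Rare shared terms contribute more than common ones.
        score = sum(math.log(n / df[t]) + 1.0 for t in q_terms & set(d))
        scored.append((score, i))
    scored.sort(reverse=True)
    return [corpus[i] for _, i in scored[:k]]

def build_prompt(query, passages):
    """Assemble the retrieval-augmented prompt handed to the generator (an LLM in practice)."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

hits = retrieve("What is hallucination in a model?", CORPUS, k=1)
prompt = build_prompt("What is hallucination in a model?", hits)
print(prompt)
```

The survey's retriever/generator optimizations (query rewriting, reranking, context compression, and so on) all slot into this skeleton: they refine either what `retrieve` returns or how the context is packed before generation.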
WU Xuan, FU Tao. Comprehensive Review of Retrieval-Augmented Generation[J]. Computer Engineering and Applications, 2025, 61(20): 19-35.