
Computer Engineering and Applications ›› 2025, Vol. 61 ›› Issue (20): 19-35. DOI: 10.3778/j.issn.1002-8331.2501-0061
• Research Hotspots and Reviews •
Comprehensive Review of Retrieval-Augmented Generation
WU Xuan, FU Tao
Online: 2025-10-15
Published: 2025-10-15
吴璇, 付涛
WU Xuan, FU Tao. Comprehensive Review of Retrieval-Augmented Generation[J]. Computer Engineering and Applications, 2025, 61(20): 19-35.
吴璇, 付涛. 检索增强生成技术研究综述[J]. 计算机工程与应用, 2025, 61(20): 19-35.
URL: http://cea.ceaj.org/EN/10.3778/j.issn.1002-8331.2501-0061
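
For citation managers, the metadata above can be assembled into a BibTeX entry. A minimal sketch from the fields shown on this page; the entry key is an arbitrary choice, not something the journal assigns:

% entry key below is arbitrary (not taken from the journal page)
@article{wu2025rag_review,
  author  = {Wu, Xuan and Fu, Tao},
  title   = {Comprehensive Review of Retrieval-Augmented Generation},
  journal = {Computer Engineering and Applications},
  year    = {2025},
  volume  = {61},
  number  = {20},
  pages   = {19--35},
  doi     = {10.3778/j.issn.1002-8331.2501-0061}
}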
References
[1] CUI Y, JIA M L, LIN T Y, et al. Class-balanced loss based on effective number of samples[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 9260-9269.
[2] LIU Z W, MIAO Z Q, ZHAN X H, et al. Large-scale long-tailed recognition in an open world[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 2532-2541.
[3] ZHANG S Y, CHEN C, HU X Y, et al. Balanced knowledge distillation for long-tailed learning[J]. Neurocomputing, 2023, 527: 36-46.
[4] MALLEN A, ASAI A, ZHONG V, et al. When not to trust language models: investigating effectiveness of parametric and non-parametric memories[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 9802-9822.
[5] FENG Z, MA W, YU W, et al. Trends in integration of knowledge and large language models: a survey and taxonomy of methods, benchmarks, and applications[J]. arXiv:2311.05876, 2023.
[6] LIU X, LAI H Y, YU H, et al. WebGLM: towards an efficient web-enhanced question answering system with human preferences[C]//Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York: ACM, 2023: 4549-4560.
[7] HUANG L, YU W J, MA W T, et al. A survey on hallucination in large language models: principles, taxonomy, challenges, and open questions[J]. ACM Transactions on Information Systems, 2025, 43(2): 1-55.
[8] ZHAO H Y, CHEN H J, YANG F, et al. Explainability for large language models: a survey[J]. ACM Transactions on Intelligent Systems and Technology, 2024, 15(2): 1-38.
[9] CHOUDHARY M, DU X Y. QAEVENT: event extraction as question-answer pairs generation[C]//Findings of the Association for Computational Linguistics: EACL 2024. Stroudsburg: ACL, 2024: 1860-1873.
[10] ZHANG C, ZHANG H, SUN Y C, et al. Downstream transformer generation of question-answer pairs with preprocessing and postprocessing pipelines[C]//Proceedings of the 22nd ACM Symposium on Document Engineering. New York: ACM, 2022: 1-8.
[11] LING J T, AFZAAL M. Automatic question-answer pairs generation using pre-trained large language models in higher education[J]. Computers and Education: Artificial Intelligence, 2024, 6: 100252.
[12] LI H, SU Y, CAI D, et al. A survey on retrieval-augmented text generation[J]. arXiv:2202.01110, 2022.
[13] WU S, XIONG Y, CUI Y, et al. Retrieval-augmented generation for natural language processing: a survey[J]. arXiv:2407.13193, 2024.
[14] ZHAO P, ZHANG H, YU Q, et al. Retrieval-augmented generation for AI-generated content: a survey[J]. arXiv:2402.19473, 2024.
[15] YU H, GAN A R, ZHANG K, et al. Evaluation of retrieval-augmented generation: a survey[C]//Proceedings of the CCF Conference on Big Data. Singapore: Springer Nature Singapore, 2025: 102-120.
[16] PENG B, ZHU Y, LIU Y, et al. Graph retrieval-augmented generation: a survey[J]. arXiv:2408.08921, 2024.
[17] LEWIS P, PEREZ E, PIKTUS A, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks[C]//Advances in Neural Information Processing Systems, 2020: 9459-9474.
[18] CHEN T, WANG H W, CHEN S H, et al. Dense X retrieval: what retrieval granularity should we use?[C]//Proceedings of the Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2024: 15159-15177.
[19] DUARTE A V, MARQUES J D, GRAÇA M, et al. LumberChunker: long-form narrative document segmentation[C]//Findings of the Association for Computational Linguistics: EMNLP 2024. Stroudsburg: ACL, 2024: 6473-6486.
[20] ZHAO J, JI Z, QI P, et al. Meta-Chunking: learning efficient text segmentation via logical perception[J]. arXiv:2410.12788, 2024.
[21] 文森, 钱力, 胡懋地, 等. 基于大语言模型的问答技术研究进展综述[J]. 数据分析与知识发现, 2024, 8(6): 16-29.
WEN S, QIAN L, HU M D, et al. Review of research progress on question-answering techniques based on large language models[J]. Data Analysis and Knowledge Discovery, 2024, 8(6): 16-29.
[22] 赵悦阳, 崔雷. 文本嵌入技术的研究与应用进展[J]. 数据与计算发展前沿, 2023, 5(3): 92-110.
ZHAO Y Y, CUI L. Progress in research and application of text embedding technology[J]. Frontiers of Data & Computing, 2023, 5(3): 92-110.
[23] HARRIS Z S. Distributional structure[M]. Cham: Springer, 1954.
[24] BENGIO Y, DUCHARME R, VINCENT P, et al. A neural probabilistic language model[J]. Journal of Machine Learning Research, 2003, 3: 1137-1155.
[25] MIKOLOV T, SUTSKEVER I, CHEN K, et al. Distributed representations of words and phrases and their compositionality[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems, 2013: 3111-3119.
[26] PENNINGTON J, SOCHER R, MANNING C. GloVe: global vectors for word representation[C]//Proceedings of the Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2014: 1532-1543.
[27] BOJANOWSKI P, GRAVE E, JOULIN A, et al. Enriching word vectors with subword information[J]. Transactions of the Association for Computational Linguistics, 2017, 5: 135-146.
[28] XIAO S T, LIU Z, ZHANG P T, et al. C-Pack: packed resources for general Chinese embeddings[C]//Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2024: 641-649.
[29] WANG L, YANG N, HUANG X, et al. Text embeddings by weakly-supervised contrastive pre-training[J]. arXiv:2212.03533, 2022.
[30] ILIN I. Advanced RAG techniques: an illustrated overview[EB/OL]. (2023-10-17)[2024-12-15]. https://pub.towardsai.net/advanced-rag-techniques-an-illustrated-overview-04d193d8fec6.
[31] WANG Y, LIPKA N, ROSSI R A, et al. Knowledge graph prompting for multi-document question answering[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2024: 19206-19214.
[32] FAYSSE M, SIBILLE H, WU T, et al. ColPali: efficient document retrieval with vision language models[J]. arXiv:2407.01449, 2024.
[33] JEONG S, KIM K, BAEK J, et al. VideoRAG: retrieval-augmented generation over video corpus[J]. arXiv:2501.05874, 2025.
[34] CHO J, MAHATA D, IRSOY O, et al. M3DocRAG: multi-modal retrieval is what you need for multi-page multi-document understanding[J]. arXiv:2411.04952, 2024.
[35] SURI M, MATHUR P, DERNONCOURT F, et al. VisDoM: multi-document QA with visually rich elements using multimodal retrieval-augmented generation[J]. arXiv:2412.10704, 2024.
[36] SUN Q, FANG Y, WU L, et al. EVA-CLIP: improved training techniques for CLIP at scale[J]. arXiv:2303.15389, 2023.
[37] XU F, SHI W, CHOI E. RECOMP: improving retrieval-augmented LMs with context compression and selective augmentation[C]//Proceedings of the International Conference on Learning Representations, 2024.
[38] IZACARD G, GRAVE E. Leveraging passage retrieval with generative models for open domain question answering[C]//Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Stroudsburg: ACL, 2021: 874-880.
[39] IZACARD G, LEWIS P, LOMELI M, et al. Atlas: few-shot learning with retrieval augmented language models[J]. Journal of Machine Learning Research, 2023, 24(1): 11912-11954.
[40] MA X, GONG Y, HE P, et al. Query rewriting for retrieval-augmented large language models[J]. arXiv:2305.14283, 2023.
[41] MAO S Y, JIANG Y, CHEN B L, et al. RaFe: ranking feedback improves query rewriting for RAG[C]//Findings of the Association for Computational Linguistics: EMNLP 2024. Stroudsburg: ACL, 2024: 884-901.
[42] GAO L Y, MA X G, LIN J, et al. Precise zero-shot dense retrieval without relevance labels[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 1762-1777.
[43] WANG L, YANG N, WEI F R. Query2doc: query expansion with large language models[C]//Proceedings of the Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2023: 9414-9423.
[44] ZHOU D, SCHÄRLI N, HOU L, et al. Least-to-most prompting enables complex reasoning in large language models[J]. arXiv:2205.10625, 2022.
[45] CHAN C M, XU C, YUAN R, et al. RQ-RAG: learning to refine queries for retrieval augmented generation[J]. arXiv:2404.00610, 2024.
[46] ZHENG H S, MISHRA S, CHEN X, et al. Take a step back: evoking reasoning via abstraction in large language models[J]. arXiv:2310.06117, 2023.
[47] FINARDI P, AVILA L, CASTALDONI R, et al. The chronicles of RAG: the retriever, the chunk and the generator[J]. arXiv:2401.07883, 2024.
[48] SAWARKAR K, MANGAL A, SOLANKI S R. Blended RAG: improving RAG (retriever-augmented generation) accuracy with semantic search and hybrid query-based retrievers[C]//Proceedings of the IEEE 7th International Conference on Multimedia Information Processing and Retrieval. Piscataway: IEEE, 2024: 155-161.
[49] ZHANG P, XIAO S, LIU Z, et al. Retrieve anything to augment large language models[J]. arXiv:2310.07554, 2023.
[50] SHI W, MIN S, YASUNAGA M, et al. REPLUG: retrieval-augmented black-box language models[C]//Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2024: 8371-8384.
[51] SIRIWARDHANA S, WEERASEKERA R, WEN E, et al. Improving the domain adaptation of retrieval augmented generation (RAG) models for open domain question answering[J]. Transactions of the Association for Computational Linguistics, 2023, 11: 1-17.
[52] LIU Z, ZHANG L, LI Q, et al. Invar-RAG: invariant LLM-aligned retrieval for better generation[J]. arXiv:2411.07021, 2024.
[53] YANG H Y, LI Z T, ZHANG Y, et al. PRCA: fitting black-box large language models for retrieval question answering via pluggable reward-driven contextual adapter[C]//Proceedings of the Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2023: 5364-5375.
[54] ASAI A, WU Z Q, WANG Y Z, et al. Self-RAG: learning to retrieve, generate, and critique through self-reflection[J]. arXiv:2310.11511, 2023.
[55] GUAN X, ZENG J, MENG F, et al. DeepRAG: thinking to retrieval step by step for large language models[J]. arXiv:2502.01142, 2025.
[56] JIANG X, FANG Y, QIU R, et al. TC-RAG: Turing-complete RAG’s case study on medical LLM systems[J]. arXiv:2408.09199, 2024.
[57] RAU D, WANG S, DÉJEAN H, et al. Context embeddings for efficient answer generation in RAG[J]. arXiv:2407.09252, 2024.
[58] SHI K Z, SUN X Y, LI Q, et al. Compressing long context for enhancing RAG with AMR-based concept distillation[J]. arXiv:2405.03085, 2024.
[59] CHENG X, WANG X, ZHANG X X, et al. xRAG: extreme context compression for retrieval-augmented generation with one token[C]//Proceedings of the 38th International Conference on Neural Information Processing Systems, 2025: 109487-109516.
[60] YU Y, PING W, LIU Z H, et al. RankRAG: unifying context ranking with retrieval-augmented generation in LLMs[J]. arXiv:2407.02485, 2024.
[61] GLASS M, ROSSIELLO G, CHOWDHURY M F M, et al. Re2G: retrieve, rerank, generate[C]//Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2022: 2701-2715.
[62] AMPAZIS N. Improving RAG quality for large language models with topic-enhanced reranking[C]//Proceedings of Artificial Intelligence Applications and Innovations. Cham: Springer Nature Switzerland, 2024: 74-87.
[63] KANG B, KIM J, YUN T R, et al. Prompt-RAG: pioneering vector embedding-free retrieval-augmented generation in niche domains, exemplified by Korean medicine[J]. arXiv:2401.11246, 2024.
[64] DONG G T, ZHU Y T, ZHANG C H, et al. Understand what LLM needs: dual preference alignment for retrieval-augmented generation[C]//Proceedings of the ACM on Web Conference, 2025: 4206-4225.
[65] LIN X V, CHEN X, CHEN M, et al. RA-DIT: retrieval-augmented dual instruction tuning[J]. arXiv:2310.01352, 2023.
[66] WU Z, HU Y, SHI W, et al. Fine-grained human feedback gives better rewards for language model training[C]//Advances in Neural Information Processing Systems, 2023: 59008-59033.
[67] CHENG X, LUO D, CHEN X, et al. Lift yourself up: retrieval-augmented text generation with self-memory[C]//Proceedings of the 37th International Conference on Neural Information Processing Systems, 2023: 43780-43799.
[68] KHANDELWAL U, LEVY O, JURAFSKY D, et al. Generalization through memorization: nearest neighbor language models[J]. arXiv:1911.00172, 2019.
[69] WANG L, CHEN H, YANG N, et al. Chain-of-retrieval augmented generation[J]. arXiv:2501.14342, 2025.
[70] 张艳萍, 陈梅芳, 田昌海, 等. 面向军事领域知识问答系统的多策略检索增强生成方法[J]. 计算机应用, 2025, 45(3): 746-754.
ZHANG Y P, CHEN M F, TIAN C H, et al. Multi-strategy retrieval-augmented generation method for military domain knowledge question answering systems[J]. Journal of Computer Applications, 2025, 45(3): 746-754.
[71] SHAO Z H, GONG Y Y, SHEN Y L, et al. Enhancing retrieval-augmented large language models with iterative retrieval-generation synergy[C]//Proceedings of the Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2023: 9248-9274.
[72] LIU N F, LIN K, HEWITT J, et al. Lost in the middle: how language models use long contexts[J]. Transactions of the Association for Computational Linguistics, 2024, 12: 157-173.
[73] ZHU Y, WANG Y, MU J Y, et al. Short text classification with soft knowledgeable prompt-tuning[J]. Expert Systems with Applications, 2024, 246: 123248.
[74] VU T, LESTER B, CONSTANT N, et al. SPoT: better frozen model adaptation through soft prompt transfer[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2022: 5039-5059.
[75] WU K, WU E, ZOU J. ClashEval: quantifying the tug-of-war between an LLM’s internal prior and external evidence[J]. arXiv:2404.10198, 2024.
[76] RU D, QIU L, HU X, et al. RAGChecker: a fine-grained framework for diagnosing retrieval-augmented generation[C]//Proceedings of the Conference on Neural Information Processing Systems, 2024.
[77] CHEN J W, LIN H Y, HAN X P, et al. Benchmarking large language models in retrieval-augmented generation[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2024: 17754-17762.
[78] LIU Y, HUANG L, LI S, et al. RECALL: a benchmark for LLMs robustness against external counterfactual knowledge[J]. arXiv:2311.08147, 2023.
[79] THAKUR N, BONIFACIO L, ZHANG X, et al. NoMIRACL: knowing when you don’t know for robust multilingual retrieval-augmented generation[J]. arXiv:2312.11361, 2023.
[80] ES S, JAMES J, ESPINOSA ANKE L, et al. RAGAs: automated evaluation of retrieval augmented generation[C]//Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations. Stroudsburg: ACL, 2024: 150-158.
[81] SAAD-FALCON J, KHATTAB O, POTTS C, et al. ARES: an automated evaluation framework for retrieval-augmented generation systems[C]//Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2024: 338-354.
[82] LYU Y J, LI Z Y, NIU S M, et al. CRUD-RAG: a comprehensive Chinese benchmark for retrieval-augmented generation of large language models[J]. ACM Transactions on Information Systems, 2025, 43(2): 1-32.
[83] TANG Y, YANG Y. MultiHop-RAG: benchmarking retrieval-augmented generation for multi-hop queries[J]. arXiv:2401.15391, 2024.
[84] XIONG G Z, JIN Q, LU Z Y, et al. Benchmarking retrieval-augmented generation for medicine[C]//Findings of the Association for Computational Linguistics: ACL 2024. Stroudsburg: ACL, 2024: 6233-6251.
[85] ZHU K, LUO Y, XU D, et al. RAGEval: scenario-specific RAG evaluation dataset generation framework[J]. arXiv:2408.01262, 2024.
[86] SIMON S, MAILACH A, DORN J, et al. A methodology for evaluating RAG systems: a case study on configuration dependency validation[J]. arXiv:2410.08801, 2024.
[87] YASUNAGA M, AGHAJANYAN A, SHI W, et al. Retrieval-augmented multimodal language modeling[J]. arXiv:2211.12561, 2022.
[88] CHAN D M, GHOSH S, RASTROW A, et al. Using external off-policy speech-to-text mappings in contextual end-to-end automated speech recognition[J]. arXiv:2301.02736, 2023.
[89] NASHID N, SINTAHA M, MESBAH A. Retrieval-based prompt selection for code-related few-shot learning[C]//Proceedings of the IEEE/ACM 45th International Conference on Software Engineering. Piscataway: IEEE, 2023: 2450-2462.
[90] XU P, PING W, WU X, et al. Retrieval meets long context large language models[C]//Proceedings of the International Conference on Learning Representations, 2024.
[91] KORTUKOV E, RUBINSTEIN A, NGUYEN E, et al. Studying large language model behaviors under context-memory conflicts with real documents[J]. arXiv:2402.16032, 2024.
[92] GUTIÉRREZ B J, SHU Y, GU Y, et al. HippoRAG: neurobiologically inspired long-term memory for large language models[J]. arXiv:2405.14831, 2024.
[93] XU Z T, CRUZ M J, GUEVARA M, et al. Retrieval-augmented generation with knowledge graphs for customer service question answering[C]//Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2024: 2905-2909.
[94] SARMAH B, MEHTA D, HALL B, et al. HybridRAG: integrating knowledge graphs and vector retrieval augmented generation for efficient information extraction[C]//Proceedings of the 5th ACM International Conference on AI in Finance. New York: ACM, 2024: 608-616.
[95] YUAN Y, LIU C, YUAN J, et al. A HybridRAG system with comprehensive enhancement on complex reasoning[J]. arXiv:2408.05141, 2024.
[96] SU W H, TANG Y C, AI Q Y, et al. DRAGIN: dynamic retrieval augmented generation based on the real-time information needs of large language models[C]//Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2024: 12991-13013.
[97] YAN S Q, GU J C, ZHU Y, et al. Corrective retrieval augmented generation[J]. arXiv:2401.15884, 2024.
[98] QIAN H, ZHANG P, LIU Z, et al. MemoRAG: moving towards next-gen RAG via memory-inspired knowledge discovery[J]. arXiv:2409.05591, 2024.
[99] WU J D, ZHU J Y, QI Y L. Medical graph RAG: towards safe medical large language model via graph retrieval-augmented generation[J]. arXiv:2408.04187, 2024.
[100] WIRATUNGA N, ABEYRATNE R, JAYAWARDENA L, et al. CBR-RAG: case-based reasoning for retrieval augmented generation in LLMs for legal question answering[C]//Proceedings of Case-Based Reasoning Research and Development. Cham: Springer Nature Switzerland, 2024: 445-460.
[101] NGUYEN L, QUAN T. URAG: implementing a unified hybrid RAG for precise answers in university admission Chatbots—a case study at HCMUT[J]. arXiv:2501.16276, 2025.
Related Articles
[1] JIANG Shuangwu, ZHANG Jiawei, HUA Liansheng, YANG Jinglin. Implementation of Meteorological Database Question-Answering Based on Large-Scale Model Retrieval-Augmentation Generation[J]. Computer Engineering and Applications, 2025, 61(5): 113-121.
[2] HAN Ming, CAO Zhixuan, WANG Jingtao, DUAN Liying, WANG Jianhong. Enterprise Carbon Emission Analysis and Knowledge Question-Answering System Based on Large Language Models[J]. Computer Engineering and Applications, 2025, 61(16): 370-382.
[3] GUO Maozu, ZHANG Xinxin, ZHAO Lingling, ZHANG Qingyu. Seismic Response Prediction of Structures Using Large Language Models[J]. Computer Engineering and Applications, 2025, 61(16): 132-145.
[4] WEI Qianqiang, ZHAO Shuliang, LU Danqi, JIA Xiaowen, YANG Shilong. Multi-Hop Knowledge Base Question Answering with Pre-Trained Language Model Feature Enhancement[J]. Computer Engineering and Applications, 2024, 60(22): 184-196.
[5] SU Youli, HU Xuanyu, MA Shijie, ZHANG Yuning, Abudukelimu Abulizi, Halidanmu Abudukelimu. Review of Research on Artificial Intelligence in Traditional Chinese Medicine Diagnosis and Treatment[J]. Computer Engineering and Applications, 2024, 60(16): 1-18.
[6] TIAN Yuqing, WANG Chunmei, YUAN Feiniu. Multi-Knowledge Base Common Sense Question Answering Model Based on Local Feature Fusion[J]. Computer Engineering and Applications, 2024, 60(12): 129-135.
[7] LI Jinrong, LYU Guoying, LI Ru, CHAI Qinghua, WANG Chao. Chinese Negative Semantic Representation and Annotation Combined with Hybrid Attention Mechanism and BiLSTM-CRF[J]. Computer Engineering and Applications, 2023, 59(9): 167-175.
[8] SHAN Xiaohuan, QI Xin’ao, SONG Baoyan, ZHANG Haolin. Domain Entity Disambiguation Combining Multi-Feature Graph and Entity Influence[J]. Computer Engineering and Applications, 2023, 59(5): 305-311.
[9] CAI Yinqiong, FAN Yixing, GUO Jiafeng, ZHANG Ruqing. Multi-Representation Model for the First-Stage Semantic Retrieval[J]. Computer Engineering and Applications, 2023, 59(4): 139-146.
[10] CHEN Yang, WAN Weibing. Generalization Performance Optimization of Entity Link Models Based on Multi-Channel Feature Fusion[J]. Computer Engineering and Applications, 2023, 59(16): 125-134.
[11] WANG Yong, JIANG Yang, WANG Hongbin, HOU Sha. Knowledge Base Construction Method for Scientific and Technical Information Analysis[J]. Computer Engineering and Applications, 2022, 58(22): 142-149.
[12] WEN Dongzhen, ZHANG Fan, LIU Haifeng, YANG Liang, XU Bo, LIN Yuan, LIN Hongfei. Code Search Review: From Perspective of Deep Program Comprehension[J]. Computer Engineering and Applications, 2022, 58(20): 63-72.
[13] LI Shuaichi, YANG Zhihao, WANG Xinlei, HAN Qinyu, LIN Hongfei. Open Domain Chinese Knowledge Based Question Answering Based on Feature Enhancement[J]. Computer Engineering and Applications, 2022, 58(17): 206-212.
[14] DU Yufei, WU Baoguo, CHEN Dong. Study of Trees and Shrubs Recognition Inference Algorithm Based on Production Rules[J]. Computer Engineering and Applications, 2020, 56(5): 242-250.
[15] WANG Lingyang, CHEN Qinkuang, SHOU Lidan, CHEN Ke. Research of Entity Matching Based on Multiple Heterogeneous Data[J]. Computer Engineering and Applications, 2019, 55(19): 87-95.