
Computer Engineering and Applications ›› 2025, Vol. 61 ›› Issue (15): 1-13. DOI: 10.3778/j.issn.1002-8331.2410-0481
Survey of Collaborative Symbiosis Mode Between Knowledge Graph and Large Language Model and Its Education Application
LI Xiaoli, LIU Chunfang, GENG Shaokun
Online: 2025-08-01
Published: 2025-07-31
Abstract: In recent years, the rapid development of artificial intelligence technologies, particularly large language models (LLMs) and knowledge graphs (KGs), has created important technical conditions for the digital, intelligent transformation of education. This survey first analyzes the application advantages, current status, and open problems of LLMs and of KGs in intelligent education. On this basis, it examines in depth the collaborative symbiosis mode between knowledge graphs and large language models, including the ways in which the two technologies enhance each other, reviews the state of research on this collaboration, and summarizes related applications in education in recent years. Finally, it discusses and looks ahead to the development trends of applying knowledge graph and large language model technologies jointly in education.
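To make the "mutual enhancement" described in the abstract concrete, the following minimal Python sketch (not taken from the paper) illustrates the two directions of KG-LLM collaboration it names: grounding an LLM prompt with triples retrieved from a toy course knowledge graph, and reserving a hook for writing LLM-extracted triples back into the graph. The in-memory triple list, the call_llm stub, and all function names are hypothetical placeholders, not any specific system's API.

# Minimal illustrative sketch of the two enhancement directions:
# (1) KG -> LLM: retrieved triples ground the prompt;
# (2) LLM -> KG: extracted triples could be written back (stubbed here).
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)

# A toy course knowledge graph kept as an in-memory list of triples.
course_kg: List[Triple] = [
    ("Linear Regression", "prerequisite_of", "Logistic Regression"),
    ("Logistic Regression", "prerequisite_of", "Neural Networks"),
    ("Gradient Descent", "used_by", "Linear Regression"),
]

def retrieve_facts(question: str, kg: List[Triple]) -> List[Triple]:
    """KG -> LLM: pick triples whose entities appear in the question."""
    q = question.lower()
    return [t for t in kg if t[0].lower() in q or t[2].lower() in q]

def build_grounded_prompt(question: str, facts: List[Triple]) -> str:
    """Serialize retrieved triples into the prompt to constrain the answer."""
    fact_lines = "\n".join(f"- {h} {r.replace('_', ' ')} {t}" for h, r, t in facts)
    return (f"Known course facts:\n{fact_lines}\n\n"
            f"Using only these facts, answer: {question}")

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call; returns a canned reply."""
    return ("Study Linear Regression first, since it is a prerequisite of "
            "Logistic Regression.")

def update_kg_from_answer(answer: str, kg: List[Triple]) -> None:
    """LLM -> KG: a real system would extract, validate, and merge new triples."""
    pass  # extraction and validation are omitted in this sketch

if __name__ == "__main__":
    q = "What should I learn before Logistic Regression?"
    facts = retrieve_facts(q, course_kg)
    print(call_llm(build_grounded_prompt(q, facts)))

In a real deployment, the canned call_llm would be replaced by an actual model call and update_kg_from_answer would run an extraction-plus-validation step, mirroring the bidirectional enhancement the survey reviews.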
LI Xiaoli, LIU Chunfang, GENG Shaokun. Survey of Collaborative Symbiosis Mode Between Knowledge Graph and Large Language Model and Its Education Application[J]. Computer Engineering and Applications, 2025, 61(15): 1-13.