
Computer Engineering and Applications ›› 2025, Vol. 61 ›› Issue (7): 1-24. DOI: 10.3778/j.issn.1002-8331.2409-0300
Research on Intelligent Question Answering System Based on Large Language Model

REN Haiyu, LIU Jianping, WANG Jian, GU Xunxun, CHEN Xi, ZHANG Yue, ZHAO Changxu

Online: 2025-04-01
Published: 2025-04-01
Abstract: Intelligent question answering (QA) is a core subfield of natural language processing that aims to build systems capable of understanding and answering natural language questions posed by users. Traditional QA systems typically rely on predefined rules and limited corpora, and therefore cannot handle complex multi-turn dialogue. Large language models (LLMs) are natural language processing models built on deep learning, with billions or even hundreds of billions of parameters; they not only understand and generate natural language but also markedly improve the accuracy and efficiency of QA systems, advancing intelligent QA technology. In recent years, LLM-based intelligent QA has become a research hotspot, yet systematic surveys of the field remain scarce. This paper therefore presents a systematic survey of LLM-based intelligent QA systems. It first introduces the basic concepts of QA systems, together with common datasets and their evaluation metrics. It then reviews LLM-based QA systems, including QA based on prompt learning, knowledge graphs, retrieval-augmented generation, and intelligent agents, as well as the technical route of fine-tuning for QA tasks, and compares the strengths, weaknesses, and application scenarios of these five approaches. Finally, it summarizes the research challenges and future development trends facing current LLM-based QA systems.
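To make the retrieval-augmented generation route named in the abstract concrete, the sketch below shows a minimal RAG-style QA loop: retrieve the top-k passages by embedding similarity, assemble them into a grounded prompt, and pass that prompt to a generator. This is an illustrative sketch, not the surveyed systems' implementation; the Document corpus, query_vec embedding, and llm callable are hypothetical placeholders.

```python
# Minimal sketch of a retrieval-augmented generation (RAG) QA pipeline.
# Illustrative only: Document, retrieve, build_prompt, and the `llm`
# callable are hypothetical placeholders, not an API from the survey.

import math
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Document:
    doc_id: str
    text: str
    vector: List[float]  # precomputed embedding of `text`

def cosine(a: List[float], b: List[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec: List[float], corpus: List[Document], k: int = 3) -> List[Document]:
    # Rank documents by embedding similarity and keep the top-k passages.
    return sorted(corpus, key=lambda d: cosine(query_vec, d.vector), reverse=True)[:k]

def build_prompt(question: str, passages: List[Document]) -> str:
    # Prompt construction: ground the generator's answer in retrieved evidence.
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in passages)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )

def answer(question: str, query_vec: List[float],
           corpus: List[Document], llm: Callable[[str], str]) -> str:
    # Retrieve evidence, build a grounded prompt, and generate an answer.
    passages = retrieve(query_vec, corpus)
    return llm(build_prompt(question, passages))
```

With an embedding function and any text-generation callable plugged in, answer(question, embed(question), corpus, llm) returns a response grounded in the retrieved passages; the prompt-construction step also illustrates, in miniature, the prompt-learning route the survey discusses alongside RAG.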
REN Haiyu, LIU Jianping, WANG Jian, GU Xunxun, CHEN Xi, ZHANG Yue, ZHAO Changxu. Research on Intelligent Question Answering System Based on Large Language Model[J]. Computer Engineering and Applications, 2025, 61(7): 1-24.