[1] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv:1810.04805, 2018.
[2] ZENG A H, LIU X, DU Z H, et al. GLM-130B: an open bilingual pre-trained model[C]//Proceedings of the 2023 International Conference on Learning Representations, 2023.
[3] WANG H, LIU C, XI N, et al. HuaTuo: tuning LLaMA model with Chinese medical knowledge[J]. arXiv:2304.06975, 2023.
[4] WANG N, YANG H, WANG C D. FinGPT: instruction tuning benchmark for open-source large language models in financial datasets[J]. arXiv:2310.04793, 2023.
[5] 代佳梅, 孔韦韦, 王泽, 等. BERT和LSI的端到端方面级情感分析模型[J]. 计算机工程与应用, 2024, 60(12): 144-152.
DAI J M, KONG W W, WANG Z, et al. End-to-end aspect-based sentiment analysis model based on BERT and LSI[J]. Computer Engineering and Applications, 2024, 60(12): 144-152.
[6] KRALJEVIC Z, SHEK A, BEAN D, et al. MedGPT: medical concept prediction from clinical narratives[J]. arXiv:2107.03134, 2021.
[7] MAYNEZ J, NARAYAN S, BOHNET B, et al. On faithfulness and factuality in abstractive summarization[J]. arXiv:2005.00661, 2020.
[8] SHUSTER K, POFF S, CHEN M, et al. Retrieval augmentation reduces hallucination in conversation[J]. arXiv:2104.07567, 2021.
[9] AGARWAL O, GE H, SHAKERI S, et al. Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training[J]. arXiv:2010.12688, 2020.
[10] 张鹤译, 王鑫, 韩立帆, 等. 大语言模型融合知识图谱的问答系统研究[J]. 计算机科学与探索, 2023, 17(10): 2377-2388.
ZHANG H Y, WANG X, HAN L F, et al. Research on question answering system on joint of knowledge graph and large language models[J]. Journal of Frontiers of Computer Science and Technology, 2023, 17(10): 2377-2388.
[11] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J]. arXiv:1706.03762, 2017.
[12] GEMINI TEAM, ANIL R, BORGEAUD S, et al. Gemini: a family of highly capable multimodal models[J]. arXiv:2312.11805, 2023.
[13] SHUSTER K, XU J, KOMEILI M, et al. BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage[J]. arXiv:2208.03188, 2022.
[14] LI Y, LI Z, ZHANG K, et al. ChatDoctor: a medical chat model fine-tuned on a large language model meta-AI (LLaMA) using medical domain knowledge[J]. arXiv:2303.14070, 2023.
[15] TOUVRON H, LAVRIL T, IZACARD G, et al. LLaMA: open and efficient foundation language models[J]. arXiv:2302.13971, 2023.
[16] XIONG H, WANG S, ZHU Y, et al. DoctorGLM: fine-tuning your Chinese doctor is not a Herculean task[J]. arXiv:2304.01097, 2023.
[17] WU C, ZHANG X, ZHANG Y, et al. PMC-LLaMA: further finetuning LLaMA on medical papers[J]. arXiv:2304.14454, 2023.
[18] WEI J, WANG X, SCHUURMANS D, et al. Chain-of-thought prompting elicits reasoning in large language models[C]//Advances in Neural Information Processing Systems 35, 2022: 24824-24837.
[19] WANG J, SUN Q, LI X, et al. Boosting language models reasoning with chain-of-knowledge prompting[J]. arXiv:2306.06427, 2023.
[20] TRIVEDI H, BALASUBRAMANIAN N, KHOT T, et al. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions[J]. arXiv:2212.10509, 2022.
[21] NIU M, LI H, SHI J, et al. Mitigating hallucinations in large language models via self-refinement-enhanced knowledge retrieval[J]. arXiv:2405.06545, 2024.
[22] ZHOU T, CHEN Y, LIU K, et al. CogMG: collaborative augmentation between large language model and knowledge graph[J]. arXiv:2406.17231, 2024.
[23] WANG J, CHEN M, HU B, et al. Learning to plan for retrieval-augmented large language models from knowledge graphs[C]//Findings of the Association for Computational Linguistics: EMNLP 2024. Stroudsburg: ACL, 2024.
[24] KANG Y L, CHANG Y, FU J Y, et al. CMLM-ZhongJing: large language model is GoodStory listener[EB/OL]. [2024-07-12]. https://github.com/pariskang/CMLM-ZhongJing.
[25] 张君冬, 杨松桦. HuangDi: 中医古籍生成式大语言模型的构建研究[EB/OL]. [2024-07-12]. https://github.com/Zlasejd/HuangDi.
ZHANG J D, YANG S H. HuangDi: research on the construction of generative large language model for ancient Chinese medicine books[EB/OL]. [2024-07-12]. https://github.com/Zlasejd/HuangDi.
[26] TAN Y, LI M, HUANG Z, et al. MedChatZH: a better medical adviser learns from better instructions[J]. arXiv:2309.01114, 2023.
[27] ZHANG N, JIA Q, YIN K, et al. Conceptualized representation learning for Chinese biomedical text mining[J]. arXiv:2008.10813, 2020.
[28] 奥德玛, 杨云飞, 穗志方, 等. 中文医学知识图谱CMeKG构建初探[J]. 中文信息学报, 2019, 33(10): 1-7.
ODMAA, YANG Y F, SUI Z F, et al. Preliminary study on the construction of Chinese medical knowledge graph[J]. Journal of Chinese Information Processing, 2019, 33(10): 1-7.
[29] ROBERTSON S, ZARAGOZA H. The probabilistic relevance framework: BM25 and beyond[J]. Foundations and Trends in Information Retrieval, 2009, 3(4): 333-389.
[30] LIU X, JI K, FU Y, et al. P-tuning v2: prompt tuning can be comparable to fine-tuning universally across scales and tasks[J]. arXiv:2110.07602, 2021.
[31] HU E J, SHEN Y, WALLIS P, et al. LoRA: low-rank adaptation of large language models[J]. arXiv:2106.09685, 2021.
[32] OUYANG L, WU J, XU J, et al. Training language models to follow instructions with human feedback[C]//Advances in Neural Information Processing Systems 35, 2022: 27730-27744.
[33] ZHANG H, CHEN J, JIANG F, et al. HuatuoGPT, towards taming language model to be a doctor[C]//Findings of the Association for Computational Linguistics: EMNLP 2023. Stroudsburg: ACL, 2023: 10859-10885.
[34] WANG X, CHEN G, SONG D, et al. CMB: a comprehensive medical benchmark in Chinese[C]//Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2024: 6184-6205.