
Computer Engineering and Applications ›› 2025, Vol. 61 ›› Issue (22): 36-54. DOI: 10.3778/j.issn.1002-8331.2501-0086
MA Xiao, TIAN Yonghong, ZHAO Wei
Online: 2025-11-15
Published: 2025-11-14
Abstract: With advances in computing technology, machine translation has become a key tool for cross-lingual communication. Its development can be divided into rule-based machine translation, statistical machine translation, and deep-learning-based neural machine translation. Focusing on the application and innovation of large language models in translation, this survey comprehensively reviews and systematically organizes recent progress in neural machine translation (NMT). It outlines the evolution of machine translation from early recurrent neural networks to convolutional neural networks and on to the now widely used Transformer model and its variants, and analyzes the development of mainstream NMT architectures. It then examines three key dimensions of large-language-model translation: it systematically compares the performance of full-parameter fine-tuning and parameter-efficient fine-tuning on translation tasks; it discusses multilingual large-model translation techniques and the challenges of, and solutions for, zero-shot and few-shot cross-lingual transfer; and it surveys large-model translation methods enhanced by knowledge graphs, domain expertise, and multimodal knowledge fusion. Finally, it introduces evaluation metrics and commonly used datasets for machine translation, and outlines future research directions, including improving low-resource language translation, interpretable and controllable translation systems, cross-culturally adaptive translation, computational resource optimization, and privacy protection with security and controllability.
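The abstract contrasts full-parameter fine-tuning with parameter-efficient fine-tuning (PEFT) for translation tasks. As a purely illustrative aid (not the paper's setup), the following minimal sketch shows how a pretrained translation model might be wrapped with LoRA adapters using the Hugging Face transformers and peft libraries; the checkpoint name, rank, and target module names are assumptions that vary by model architecture.

```python
# Minimal LoRA sketch for a translation model (illustrative; not the paper's setup).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "Helsinki-NLP/opus-mt-zh-en"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# LoRA freezes the base weights and trains low-rank updates on selected projections,
# so only a small fraction of parameters is updated compared with full fine-tuning.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # module names depend on the architecture
)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()    # typically well under 1% of all parameters
```

For the evaluation metrics the abstract mentions, a small sketch with the sacrebleu package computes corpus-level BLEU and chrF; the sentences are invented for illustration only.

```python
import sacrebleu

hypotheses = ["The cat is sitting on the mat."]
references = [["The cat sat on the mat."]]  # one list per reference stream

print(sacrebleu.corpus_bleu(hypotheses, references).score)  # BLEU
print(sacrebleu.corpus_chrf(hypotheses, references).score)  # chrF
```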
MA Xiao, TIAN Yonghong, ZHAO Wei. Review of Neural Network-Based Machine Translation Research[J]. Computer Engineering and Applications, 2025, 61(22): 36-54.
[1] 董磊, 吴福居, 史健勇, 潘龙飞. Construction and Application of a Multimodal Knowledge Graph for Construction Safety Based on Large Language Models[J]. Computer Engineering and Applications, 2025, 61(9): 325-333.
[2] 任海玉, 刘建平, 王健, 顾勋勋, 陈曦, 张越, 赵昌顼. Survey of Intelligent Question Answering Systems Based on Large Language Models[J]. Computer Engineering and Applications, 2025, 61(7): 1-24.
[3] 王敬凯, 秦董洪, 白凤波, 李路路, 孔令儒, 徐晨. Survey of Techniques for Fusing Speech Recognition with Large Language Models[J]. Computer Engineering and Applications, 2025, 61(6): 53-63.
[4] 陶江垚, 奚雪峰, 盛胜利, 崔志明, 左严. Survey of Structured-Thought Prompting for Enhancing the Reasoning Ability of Large Language Models[J]. Computer Engineering and Applications, 2025, 61(6): 64-83.
[5] 江双五, 张嘉玮, 华连生, 杨菁林. Implementation of a Meteorological Database Question Answering Model Based on Large-Model Retrieval-Augmented Generation[J]. Computer Engineering and Applications, 2025, 61(5): 113-121.
[6] 苑中旭, 李理, 何凡, 杨秀, 韩东轩. Traditional Chinese Medicine Question Answering Model Fusing Chain-of-Thought and Knowledge Graphs[J]. Computer Engineering and Applications, 2025, 61(4): 158-166.
[7] 李玥, 洪海蓝, 李文林, 杨涛. Applied Research on Constructing a Knowledge Graph of Rhinitis Medical Cases with Large Language Models[J]. Computer Engineering and Applications, 2025, 61(4): 167-175.
[8] 许旻辰, 屈丹, 司念文, 彭思思, 刘云鹏. Fake News Detection Method Based on Multi-Dimensional Multi-Agent Group Discussion[J]. Computer Engineering and Applications, 2025, 61(22): 183-195.
[9] 吴璇, 付涛. Survey of Retrieval-Augmented Generation Techniques[J]. Computer Engineering and Applications, 2025, 61(20): 19-35.
[10] 句泽东, 程春雷, 叶青, 彭琳, 龚著凡. Survey of Research Progress in Chinese Grammatical Error Correction[J]. Computer Engineering and Applications, 2025, 61(20): 36-53.
[11] 张钰莹, 云静, 刘雪颖, 史晓国. Survey of Feedback-Based Methods for Aligning the Content and Behavior of Large Language Models[J]. Computer Engineering and Applications, 2025, 61(20): 75-104.
[12] 方岢愿, 许珂维. Complementary Strengths of LLMs and ML: Quality Detection of Government Service Replies and an Interpretable Algorithmic Framework[J]. Computer Engineering and Applications, 2025, 61(16): 146-159.
[13] 韩明, 曹智轩, 王敬涛, 段丽英, 王剑宏. Enterprise Carbon Emission Analysis and Knowledge Question Answering System Based on Large Language Models[J]. Computer Engineering and Applications, 2025, 61(16): 370-382.
[14] 郭茂祖, 张欣欣, 赵玲玲, 张庆宇. Large Language Model for Structural Seismic Response Prediction[J]. Computer Engineering and Applications, 2025, 61(16): 132-145.
[15] 李晓理, 刘春芳, 耿劭坤. Survey of the Synergistic Symbiosis of Knowledge Graphs and Large Language Models and Its Educational Applications[J]. Computer Engineering and Applications, 2025, 61(15): 1-13.