
计算机工程与应用 (Computer Engineering and Applications), 2025, Vol. 61, Issue (16): 38-63. DOI: 10.3778/j.issn.1002-8331.2410-0211
秦董洪,李政韬,白凤波,董路宽,张慧,徐晨
QIN Donghong, LI Zhengtao, BAI Fengbo, DONG Lukuan, ZHANG Hui, XU Chen
Online: 2025-08-15
Published: 2025-08-15
Abstract: In recent years, the training paradigm and model scale in natural language processing have changed dramatically, shifting from task-specific supervised learning to full fine-tuning of large-scale pre-trained models. However, the explosive growth in model parameters has made full fine-tuning computationally expensive. Parameter-efficient fine-tuning (PEFT) techniques have emerged in response: by tuning only a subset of the parameters or introducing a small number of new ones, they reduce cost substantially while preserving performance. This survey briefly introduces and systematically analyzes the most representative and cutting-edge PEFT methods of recent years, covering their design rationale and core algorithms; it summarizes the characteristics, strengths, weaknesses, and applicable scenarios of the different methods, further compares multiple methods belonging to the same family within each category, and traces how the design philosophy of each family has evolved, providing a comprehensive overview of the current state of research. Finally, it offers an overall analysis and outlook on PEFT, identifies possible directions for future optimization, and, informed by practice, proposes feasible technical solutions for applying PEFT in real-world engineering.
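The abstract describes PEFT as freezing the pretrained weights and training only a small number of existing or newly added parameters. As a purely illustrative aid, and not code from the survey or from any specific method it reviews, the minimal PyTorch sketch below shows one representative instance of this idea, a LoRA-style low-rank adapter; the layer size, rank, and scaling values are assumptions chosen for the example.

```python
# Minimal sketch of the PEFT idea (LoRA-style low-rank adaptation).
# Illustrative only; hyperparameters and layer shapes are hypothetical.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pretrained linear layer and adds a trainable
    low-rank update W + (alpha/r) * B @ A, so only r*(d_in + d_out)
    parameters are tuned instead of d_in * d_out."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze pretrained weights
            p.requires_grad = False
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # trainable
        self.B = nn.Parameter(torch.zeros(d_out, r))        # trainable, init 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # frozen path plus low-rank trainable path
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# Usage: wrap one projection of a (hypothetical) pretrained model and train
# only the A/B matrices, a tiny fraction of the full parameter count.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 2 * 8 * 4096 = 65,536 vs. ~16.7M in the full layer
```

Because the initial low-rank update is zero (B starts at zero), the wrapped model behaves exactly like the pretrained one before tuning, which is the property that lets such adapters be added without disturbing pretrained behavior.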
秦董洪, 李政韬, 白凤波, 董路宽, 张慧, 徐晨. 大语言模型参数高效微调技术综述[J]. 计算机工程与应用, 2025, 61(16): 38-63.
QIN Donghong, LI Zhengtao, BAI Fengbo, DONG Lukuan, ZHANG Hui, XU Chen. Review of Parameter-Efficient Fine-Tuning Technology for Large Language Models[J]. Computer Engineering and Applications, 2025, 61(16): 38-63.