Computer Engineering and Applications ›› 2025, Vol. 61 ›› Issue (16): 38-63. DOI: 10.3778/j.issn.1002-8331.2410-0211

• Hot Topics and Reviews •


Review of Parameter-Efficient Fine-Tuning Technology for Large Language Models

QIN Donghong, LI Zhengtao, BAI Fengbo, DONG Lukuan, ZHANG Hui, XU Chen   

  1. College of Artificial Intelligence, Guangxi Minzu University, Nanning 530006, China
  • Online: 2025-08-15  Published: 2025-08-15



Abstract: In recent years, significant changes have occurred in the training paradigms and model scales of natural language processing. The dominant approach has shifted from task-specific supervised learning to full fine-tuning of large-scale pre-trained models. However, the soaring number of model parameters has made full fine-tuning prohibitively expensive. To address this, "parameter-efficient fine-tuning" (PEFT) techniques have emerged. These methods significantly reduce costs while maintaining performance by fine-tuning only a subset of parameters or introducing a small number of new parameters. This paper provides a brief introduction and systematic analysis of the most representative and cutting-edge PEFT methods developed in recent years. It covers their design philosophies and core algorithms; summarizes and analyzes the characteristics, strengths, weaknesses, and applicable scenarios of different methods; and further compares related approaches within the same category to identify evolutionary trends in their design concepts, offering a comprehensive overview of the current research landscape. Finally, the paper presents an overall analysis and outlook on PEFT technology, proposes potential directions for future optimization, and offers practical technical solutions for its application in real-world engineering contexts.
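To make the core idea concrete, the following is a minimal sketch (not taken from the paper) of one representative PEFT family the survey covers, a LoRA-style low-rank adapter: the large pretrained weight matrix is frozen, and only two small low-rank matrices are trained, so the number of updated parameters drops by orders of magnitude. All dimensions and names here are illustrative assumptions.

```python
import numpy as np

# Hypothetical LoRA-style adapter for a single linear layer.
# The pretrained weight W is frozen; only A and B are trained.
d_in, d_out, rank = 1024, 1024, 8  # illustrative sizes

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, zero-init

def adapted_forward(x):
    # Adapted layer output: W x + B (A x).
    # With B initialized to zero, the adapted model initially
    # reproduces the pretrained model exactly.
    return W @ x + B @ (A @ x)

full_params = W.size               # parameters full fine-tuning would update
adapter_params = A.size + B.size   # parameters the adapter actually trains
print(f"full fine-tuning: {full_params} trainable parameters")
print(f"LoRA adapter: {adapter_params} trainable parameters "
      f"({100 * adapter_params / full_params:.2f}% of full)")
```

For the sizes above, the adapter trains roughly 1.6% of the layer's parameters, which is the kind of cost reduction motivating the methods surveyed.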

Key words: parameter-efficient fine-tuning, deep learning, natural language processing, model optimization