Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (23): 1-27. DOI: 10.3778/j.issn.1002-8331.2407-0436

• Research Hotspots and Reviews •

Survey on Prompt Learning

CUI Jinman, LI Dongmei, TIAN Xuan, MENG Xianghao, YANG Yu, CUI Xiaohui   

  1. School of Information Science and Technology, Beijing Forestry University, Beijing 100083, China
  2. Engineering Research Center for Forestry-Oriented Intelligent Information Processing of National Forestry and Grassland Administration, Beijing 100083, China
  • Online: 2024-12-01    Published: 2024-11-29

Abstract: Fine-tuned pre-trained language models have achieved remarkable performance on tasks across many domains. However, there is a significant gap between pre-training and fine-tuning in both training data and objective functions, which limits the effective adaptation of pre-trained language models to downstream tasks. Prompt learning was proposed to bridge this gap and applies well to few-shot and even zero-shot scenarios. Its core idea is to wrap the original input in a prompt template, converting downstream task data into natural-language form; the wrapped input is fed into the pre-trained model to produce a prediction, which a verbalizer then maps to the corresponding label. This paper systematically surveys current prompt learning approaches and, following the implementation steps of prompt learning, reviews research progress in two stages: prompt template construction and verbalizer construction. Prompt-template methods are subdivided into four kinds: manually constructed templates, automatically constructed templates, templates constructed with external knowledge, and thought-prompting methods. Verbalizer methods are likewise subdivided into four kinds: manual verbalizers, search-based verbalizers, soft verbalizers, and verbalizers constructed with external knowledge. The paper then summarizes the main applications of prompt learning in natural language processing, computer vision, and multimodal tasks, and analyzes related experiments. Finally, it summarizes the current state of and challenges in prompt learning, and discusses its future technological development.
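
To make the template-plus-verbalizer pipeline concrete, the following is a minimal sketch of prompt-based sentiment classification with a masked language model via the Hugging Face transformers library. The model name, template wording, and label words are illustrative assumptions, not details taken from the surveyed works.

```python
# Minimal sketch of manual-template prompt learning for sentiment
# classification with a masked language model. The model, template,
# and verbalizer words are illustrative choices (assumptions), not
# taken from the survey.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Manually constructed prompt template: wrap the original input so the
# downstream task becomes a cloze (fill-in-the-blank) problem.
def wrap(text: str) -> str:
    return f"{text} It was {tokenizer.mask_token}."

# Manual verbalizer: map label words in the vocabulary to task labels.
verbalizer = {"great": "positive", "terrible": "negative"}

def classify(text: str) -> str:
    inputs = tokenizer(wrap(text), return_tensors="pt")
    # Locate the [MASK] slot in the tokenized input.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]  # scores over vocab
    # Compare the model's scores for each label word at the mask slot,
    # then map the best-scoring word back to its task label.
    word_ids = {w: tokenizer.convert_tokens_to_ids(w) for w in verbalizer}
    best = max(word_ids, key=lambda w: logits[word_ids[w]].item())
    return verbalizer[best]

print(classify("The movie was a waste of two hours."))  # -> "negative"
```

No task-specific head is trained here: the pre-trained model's own masked-word prediction does the classification, which is why this formulation transfers to few-shot and zero-shot settings.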

Key words: prompt learning, pre-trained models, pre-training and fine-tuning, few-shot learning, zero-shot learning
