Computer Engineering and Applications ›› 2022, Vol. 58 ›› Issue (19): 242-249.DOI: 10.3778/j.issn.1002-8331.2102-0259

• Graphics and Image Processing •

Memory-Based Transfer Learning for Few-Shot Learning

LIU Bing, YANG Juan, WANG Ronggui, XUE Lixia   

  1. School of Computer and Information, Hefei University of Technology, Hefei 230601, China
  • Online: 2022-10-01  Published: 2022-10-01


Abstract: Few-shot learning is an important field in visual recognition that aims to learn new visual concepts from limited data. To address this challenge, some meta-learning methods propose extracting transferable knowledge from a large set of auxiliary tasks and applying it to the target task. To transfer knowledge more effectively, a memory-based transfer learning method is proposed. First, a weight decomposition strategy splits the model weights into frozen weights and learnable weights; during transfer learning, the frozen weights are fixed and only the learnable weights are updated, reducing the number of parameters the model must learn. Second, an additional memory module stores the experience of previous tasks; when learning a novel task, this experience is used to initialize the parameter state of the model for better transfer. Experimental results on the miniImageNet, tieredImageNet, and CUB datasets show that, compared with state-of-the-art methods, the proposed method achieves competitive or even better performance on few-shot classification.
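The two ideas in the abstract, a weight decomposition into frozen and learnable parts, and a memory that initializes new tasks from past experience, can be illustrated with a minimal sketch. This is not the paper's implementation: the layer shape, the SGD update, and the mean-over-memory initialization rule are all illustrative assumptions.

```python
import numpy as np

class DecomposedLinear:
    """Linear layer whose weight is split into a frozen pretrained part
    and a learnable task-specific residual. Only the residual is updated
    during transfer, shrinking the set of parameters to learn."""
    def __init__(self, w_frozen):
        self.w_frozen = w_frozen                  # fixed pretrained weights
        self.w_learn = np.zeros_like(w_frozen)    # trainable residual

    def forward(self, x):
        # effective weight is the sum of the two components
        return x @ (self.w_frozen + self.w_learn).T

    def sgd_step(self, grad, lr=0.1):
        # gradient step touches only the learnable component
        self.w_learn -= lr * grad

class Memory:
    """Stores learnable-weight states from previous tasks and uses their
    average to initialize a new task (a hypothetical initialization rule)."""
    def __init__(self):
        self.slots = []

    def write(self, w_learn):
        self.slots.append(w_learn.copy())

    def init_for_new_task(self, layer):
        if self.slots:
            layer.w_learn = np.mean(self.slots, axis=0)
```

After training on an auxiliary task one would call `memory.write(layer.w_learn)`; before a novel task, `memory.init_for_new_task(layer)` warm-starts the residual instead of starting from zero, while `w_frozen` never changes.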

Key words: few-shot learning, transfer learning, memory module, meta-learning
