Computer Engineering and Applications (计算机工程与应用), 2021, Vol. 57, Issue 9: 154-161. DOI: 10.3778/j.issn.1002-8331.2002-0008

• Pattern Recognition and Artificial Intelligence •

Multi-label Transfer Learning Based on Joint Distribution

SANG Jianghui, JIANG Haiyan

  1. College of Information Science and Technology, Nanjing Agricultural University, Nanjing 210095, China
  • Online: 2021-05-01 Published: 2021-04-29

Multi-label Transfer Learning Algorithm Based on Joint Distribution Alignment

SANG Jianghui, JIANG Haiyan   

  1. College of Information Science and Technology, Nanjing Agricultural University, Nanjing 210095, China
  • Online: 2021-05-01 Published: 2021-04-29

Abstract:

To address the problem that existing multi-label transfer learning ignores the conditional distribution and therefore generalizes poorly, a multi-label transfer learning method based on joint distribution alignment (Multi-label Transfer Learning via Joint Distribution Alignment, J-MLTL) is designed. The original features are decomposed to generate a feature subspace, in which the weight coefficients of the conditional distribution are computed, so that both the marginal and the conditional distribution discrepancies of the cross-domain data are minimized. In addition, to prevent the loss of the structural information carried by the labels, a hypergraph is used to connect samples that share multiple identical labels, which keeps the geometric manifold structure within a domain from being affected by knowledge structures outside the domain and further reduces the distribution discrepancy between domains. Experimental results show that, compared with existing multi-label transfer learning algorithms, the proposed method achieves a significant improvement in classification accuracy.
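
The alignment step described above follows the general joint distribution adaptation idea: measure the marginal and the class-conditional discrepancies between source and target data and minimize their weighted sum in the learned subspace. The sketch below is purely illustrative and is not the authors' implementation; the function names (mmd_matrix, conditional_mmd_matrix), the linear-kernel MMD form, the use of target pseudo-labels, and the default uniform per-class weights are all assumptions.

```python
import numpy as np

def mmd_matrix(ns, nt):
    # MMD coefficient matrix for the marginal discrepancy between
    # ns source samples and nt target samples (linear-kernel form).
    e = np.vstack([np.full((ns, 1), 1.0 / ns),
                   np.full((nt, 1), -1.0 / nt)])
    return e @ e.T                      # shape (ns + nt, ns + nt)

def conditional_mmd_matrix(ys, yt_pseudo, classes, weights=None):
    # Weighted sum of per-class MMD matrices; `weights` plays the role of
    # the conditional-distribution weight coefficients (uniform here,
    # which is an assumption for illustration).
    ys, yt_pseudo = np.asarray(ys), np.asarray(yt_pseudo)
    ns, nt = len(ys), len(yt_pseudo)
    n = ns + nt
    M = np.zeros((n, n))
    if weights is None:
        weights = {c: 1.0 for c in classes}
    for c in classes:
        src = np.where(ys == c)[0]
        tgt = ns + np.where(yt_pseudo == c)[0]
        if len(src) == 0 or len(tgt) == 0:
            continue                    # class absent in one domain
        e = np.zeros((n, 1))
        e[src] = 1.0 / len(src)
        e[tgt] = -1.0 / len(tgt)
        M += weights[c] * (e @ e.T)
    return M

# With X the (d, ns+nt) matrix stacking source and target features and A a
# projection onto the subspace, the joint discrepancy being minimized is
#   trace(A.T @ X @ (M_marginal + M_conditional) @ X.T @ A).
```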

Key words: multi-label data, transfer learning, subspace learning, joint distribution

Abstract:

To improve the poor generalization ability of existing multi-label transfer learning, which ignores the conditional distribution, a Multi-label Transfer Learning via Joint Distribution Alignment (J-MLTL) algorithm is proposed. J-MLTL decomposes the original features to generate a feature subspace and computes the weight coefficients of the conditional distribution in that subspace, narrowing the discrepancy between domains by aligning both the marginal and the conditional distributions of the data. To prevent the loss of the structural information inside the labels, J-MLTL uses a hypergraph to connect samples that share multiple identical labels; the hypergraph keeps the geometric manifold structure within a domain from being affected by knowledge structures outside the domain and further minimizes the distribution differences between domains. Experimental results show that, compared with existing multi-label transfer learning algorithms, the classification accuracy is significantly improved.
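
The hypergraph regularization mentioned in the abstract can be illustrated with a standard hypergraph Laplacian in which every label defines one hyperedge joining all samples that carry that label. The sketch below is a minimal, assumed construction (function name hypergraph_laplacian, uniform hyperedge weights, unnormalized Laplacian); the paper's exact formulation may differ.

```python
import numpy as np

def hypergraph_laplacian(Y):
    # Y: (n_samples, n_labels) binary multi-label indicator matrix.
    # Each label defines one hyperedge connecting all samples carrying it.
    H = Y.astype(float)                      # incidence matrix, shape (n, m)
    w = np.ones(H.shape[1])                  # hyperedge weights (assumed uniform)
    d_e = H.sum(axis=0)                      # hyperedge degrees |e|
    d_e[d_e == 0] = 1.0                      # guard against labels no sample carries
    d_v = H @ w                              # vertex degrees
    # Unnormalized hypergraph Laplacian: L = Dv - H W De^{-1} H^T
    return np.diag(d_v) - H @ np.diag(w / d_e) @ H.T

# A manifold-preserving regularizer on an embedding Z of the samples is
#   R(Z) = trace(Z.T @ L @ Z),
# which penalizes mapping samples that share labels far apart.
```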

Key words: multi-label data, transfer learning, subspace learning, joint distribution