Computer Engineering and Applications ›› 2019, Vol. 55 ›› Issue (14): 24-31. DOI: 10.3778/j.issn.1002-8331.1903-0430

• Hot Topics and Reviews •


Least Squares Transfer Generative Adversarial Networks

WANG Xiaoshun, CHEN Dan, QIU Haibin   

  1. School of Electrical Engineering and Automation, Fuzhou University, Fuzhou 350116, China
  • Online:2019-07-15 Published:2019-07-11



Abstract: The standard Generative Adversarial Network (GAN) loss function has been successfully applied in transfer learning methods. However, this loss function may suffer from vanishing gradients during training. To overcome this problem, a new approach to learning domain-invariant features, called Least Squares Transfer Generative Adversarial Networks (LSTGAN), is proposed. LSTGAN adopts the Least Squares Generative Adversarial Networks (LSGAN) loss function and reduces the discrepancy between domain distributions through a single-domain discrimination training scheme. Experiments show that the proposed method compares favorably with other competitive algorithms.
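As a rough illustration only (not the authors' implementation, whose network details are not given in this abstract), the least-squares adversarial objectives that LSGAN-style methods build on can be sketched as below. The target labels a, b, c and the function names are assumptions for this sketch:

```python
import numpy as np

def d_loss_ls(d_src, d_tgt, a=0.0, b=1.0):
    """Least-squares discriminator loss:
    0.5*E[(D(src) - b)^2] + 0.5*E[(D(tgt) - a)^2].
    The discriminator pushes source-domain scores toward b
    and target-domain scores toward a."""
    return 0.5 * np.mean((d_src - b) ** 2) + 0.5 * np.mean((d_tgt - a) ** 2)

def g_loss_ls(d_tgt, c=1.0):
    """Least-squares feature-extractor (generator-side) loss:
    0.5*E[(D(tgt) - c)^2].
    The feature extractor pushes target-domain scores toward c,
    making the two domains indistinguishable to D."""
    return 0.5 * np.mean((d_tgt - c) ** 2)

# Toy discriminator outputs for a batch of source and target features.
d_src = np.array([0.9, 0.8])
d_tgt = np.array([0.1, 0.2])
print(d_loss_ls(d_src, d_tgt))  # 0.025
print(g_loss_ls(d_tgt))         # 0.3625
```

Unlike the saturating sigmoid cross-entropy of the standard GAN loss, the quadratic penalty keeps a non-vanishing gradient for samples far from the target label, which is the property motivating its use against the vanishing-gradients problem described above.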

Key words: generative adversarial networks, transfer learning, vanishing gradients, domain-invariant features, least squares generative adversarial networks loss function