Computer Engineering and Applications ›› 2009, Vol. 45 ›› Issue (12): 50-51.DOI: 10.3778/j.issn.1002-8331.2009.12.016

• Research and Discussion •

Joint-optimization algorithm of BP neural network

SUN Wei-wei, LIU Qiong-sun

  1. College of Science, Chongqing University, Chongqing 400044, China
  • Received:2008-03-13 Revised:2008-06-11 Online:2009-04-21 Published:2009-04-21
  • Contact: SUN Wei-wei


Abstract: To address the slow convergence of the BP neural network and its tendency to fall into local minima, this paper presents an improved BP algorithm that combines adaptive learning-rate adjustment with dynamic adjustment of the sigmoid (S-type) activation function. The proposed algorithm links the learning rate to the error function, and the slope of the activation function of each hidden and output unit is adjusted automatically. Finally, the validity of the proposed method is verified through simulation examples, comparing it with the standard BP algorithm, the momentum method, and the adaptive learning-rate method. The experimental results show that the jointly optimized BP algorithm can effectively speed up network convergence and has strong generalization ability.

Key words: back-propagation algorithm, learning rate, error function, activation function
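The abstract describes two ideas combined in one algorithm: a learning rate tied to the error function, and activation-function slopes that are adjusted automatically during training. The paper's exact update formulas are not given here, so the following is only a minimal NumPy sketch of that combination: a 2-2-1 network on XOR (illustrative data, not from the paper), sigmoids with trainable slope parameters `lam1`/`lam2`, and a hypothetical error-linked rule that grows the learning rate when the error falls and shrinks it when the error rises.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x, lam):
    # S-type activation with an adjustable slope lam (treated as trainable).
    return 1.0 / (1.0 + np.exp(-lam * x))

# Tiny 2-2-1 network trained on XOR (illustrative, not the paper's benchmark).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 2)); b1 = np.zeros(2)
W2 = rng.normal(0, 1, (2, 1)); b2 = np.zeros(1)
lam1 = np.ones(2)   # slopes of the hidden-unit activations
lam2 = np.ones(1)   # slope of the output-unit activation

eta, prev_err = 0.5, np.inf   # eta adapts with the error below
errors = []
N = len(X)
for epoch in range(2000):
    # Forward pass.
    z1 = X @ W1 + b1; h = sigmoid(z1, lam1)
    z2 = h @ W2 + b2; out = sigmoid(z2, lam2)
    err = 0.5 * np.mean(np.sum((out - y) ** 2, axis=1))
    errors.append(err)

    # Hypothetical error-linked learning rate: increase on improvement,
    # decrease on deterioration, kept inside fixed bounds.
    eta = min(eta * 1.05, 2.0) if err < prev_err else max(eta * 0.7, 0.01)
    prev_err = err

    # Backward pass; note d/dx sigmoid(lam*x) = lam*s*(1-s) and
    # d/dlam sigmoid(lam*x) = x*s*(1-s), so slopes get their own gradients.
    delta2 = (out - y) / N * lam2 * out * (1 - out)
    grad_W2 = h.T @ delta2
    grad_b2 = delta2.sum(axis=0)
    grad_lam2 = ((out - y) / N * z2 * out * (1 - out)).sum(axis=0)

    dE_dh = delta2 @ W2.T
    delta1 = dE_dh * lam1 * h * (1 - h)
    grad_W1 = X.T @ delta1
    grad_b1 = delta1.sum(axis=0)
    grad_lam1 = (dE_dh * z1 * h * (1 - h)).sum(axis=0)

    # Joint update of weights, biases, and activation slopes.
    W1 -= eta * grad_W1; b1 -= eta * grad_b1; lam1 -= eta * grad_lam1
    W2 -= eta * grad_W2; b2 -= eta * grad_b2; lam2 -= eta * grad_lam2

print(f"initial error {errors[0]:.4f}, final error {errors[-1]:.4f}")
```

The multiplicative factors (1.05, 0.7) and the eta bounds are placeholder choices; the paper derives its adjustments from the error function itself, and any concrete comparison with momentum or plain adaptive-rate BP would use the paper's own settings.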
