Computer Engineering and Applications ›› 2022, Vol. 58 ›› Issue (21): 264-271. DOI: 10.3778/j.issn.1002-8331.2203-0169

• Engineering and Applications •

Optimal Design of FPGA Implementation Structure for BP Neural Network

TAN Huisheng, XU Jieming, ZHANG Jiaxiang   

  1. College of Railway Transportation, Hunan University of Technology, Zhuzhou, Hunan 412000, China
  2. Hunan Province Higher Education Key Laboratory of Modeling and Monitoring on the Near-Earth Electromagnetic Environments, Changsha University of Science & Technology, Changsha 410000, China
  • Online:2022-11-01 Published:2022-11-01

Abstract: To increase the processing speed and reduce the resource consumption of the field programmable gate array (FPGA) implementation of the back propagation neural network (BPNN), an FPGA implementation structure for the BPNN is proposed in which the overall design and the key modules are optimized jointly. Fixed-point data quantization and a pipelined structure are used to raise the processing speed of the system. The Sigmoid activation function is approximated by a piecewise quadratic fit to reduce computational complexity. By adjusting the processing order of the parallel-to-serial conversion module and the activation-function module, the number of activation-function modules required is cut by 95%, lowering resource consumption. A dual-port RAM access scheme, in which reading of the original network weights and storing of the updated weights proceed alternately in a pipelined manner, is adopted to speed up data access and reduce storage-resource consumption. Character and clothing recognition experiments on the hardware-optimized design show that the optimized structure uses 31% of the total logic elements of the original design. In the FPGA, the optimized structure completes forward and back propagation for a single sample in 24.332 μs, which is 45.63% of the time taken by the software implementation in MATLAB, improving the operation speed of the BPNN.
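
The abstract specifies a piecewise quadratic fit of the Sigmoid activation function but gives neither the segment boundaries nor the coefficients. The Python sketch below illustrates the general technique only, under assumed parameters: 8 uniform segments on [0, 8], coefficients obtained by least-squares fitting with numpy.polyfit, and the identity sigmoid(-x) = 1 - sigmoid(x) for negative inputs; none of these choices are taken from the paper.

```python
import numpy as np

def fit_sigmoid_segments(n_seg=8, x_max=8.0, pts=256):
    """Least-squares fit one quadratic per uniform segment of [0, x_max].

    Segment count, interval and fitting method are assumptions for
    illustration; the paper's actual parameters are not given here.
    """
    edges = np.linspace(0.0, x_max, n_seg + 1)
    coeffs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        x = np.linspace(lo, hi, pts)
        y = 1.0 / (1.0 + np.exp(-x))        # true Sigmoid on this segment
        coeffs.append(np.polyfit(x, y, 2))  # [a, b, c] for a*x^2 + b*x + c
    return edges, np.array(coeffs)

def sigmoid_approx(x, edges, coeffs):
    """Evaluate the piecewise quadratic; use symmetry for negative inputs."""
    x = np.asarray(x, dtype=float)
    neg = x < 0
    xa = np.clip(np.abs(x), 0.0, edges[-1] - 1e-9)
    idx = np.minimum(np.searchsorted(edges, xa, side="right") - 1,
                     len(coeffs) - 1)
    a, b, c = coeffs[idx].T
    y = a * xa**2 + b * xa + c
    return np.where(neg, 1.0 - y, y)        # sigmoid(-x) = 1 - sigmoid(x)

if __name__ == "__main__":
    edges, coeffs = fit_sigmoid_segments()
    xs = np.linspace(-8, 8, 1001)
    err = np.abs(sigmoid_approx(xs, edges, coeffs) - 1.0 / (1.0 + np.exp(-xs)))
    print(f"max abs error over [-8, 8]: {err.max():.2e}")
```

In hardware, each segment would store only its three coefficients, so one activation output costs a small coefficient lookup plus, with Horner's rule, two multiplications and two additions.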

Key words: BP neural network, field programmable gate array (FPGA), hardware implementation structure, pipeline, parallel structure
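
The abstract likewise relies on fixed-point data quantization to speed up the FPGA datapath, without stating the word length or fraction width. As a rough illustration only, the sketch below assumes a signed 16-bit format with 8 fractional bits (Q8.8) and shows how weights would be rounded to integers before being loaded into hardware and how the resulting quantization error could be checked; the format actually used in the paper may differ.

```python
import numpy as np

# Assumed format: signed 16-bit fixed point with 8 fractional bits (Q8.8).
# The paper's actual word/fraction widths are not stated in the abstract.
WORD_BITS = 16
FRAC_BITS = 8
SCALE = 1 << FRAC_BITS
Q_MIN = -(1 << (WORD_BITS - 1))
Q_MAX = (1 << (WORD_BITS - 1)) - 1

def to_fixed(x):
    """Quantize floats to Q8.8 integers (round to nearest, saturate)."""
    q = np.round(np.asarray(x, dtype=float) * SCALE)
    return np.clip(q, Q_MIN, Q_MAX).astype(np.int16)

def to_float(q):
    """Convert Q8.8 integers back to floats."""
    return np.asarray(q, dtype=np.int32) / SCALE

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=1.0, size=1000)   # example weight values
    err = np.abs(to_float(to_fixed(w)) - w)
    print(f"max quantization error: {err.max():.4f} "
          f"(bound without saturation: {0.5 / SCALE:.4f})")
```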