Computer Engineering and Applications (计算机工程与应用), 2015, Vol. 51, Issue (8): 32-36.

• Theoretical Research, R&D and Design •

Design of an FPGA Parallel Acceleration Scheme for Convolutional Neural Networks

FPGA-based design for convolution neural network

FANG Rui, LIU Jiahe, XUE Zhihui, YANG Guangwen   

  1. Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
  • Online: 2015-04-15    Published: 2015-04-29

Abstract: Based on the characteristics of the Convolutional Neural Network (CNN), a deep-pipeline FPGA acceleration scheme is proposed, and a generic convolution circuit for the convolutional layers is designed. The circuit produces one convolution result per clock cycle, so in theory the scheme can compute the output for one MNIST image within 28×28 clock cycles. For the forward-propagation stage of network training, with the same network structure and the same data set, the GPU, FPGA, and CPU implementations are compared in terms of computational efficiency and power consumption. In computational efficiency, the FPGA running at only 50 MHz achieves nearly a 5× speedup over the GPU version (Caffe) and an 8× speedup over a 12-core CPU; in power consumption, the FPGA implementation uses only 26.7% of the power of the GPU version.
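
The throughput claim above (one convolution result per clock cycle, hence about 28×28 cycles to stream one MNIST image) can be illustrated with a small software model. The sketch below is not the circuit described in the paper, whose internal structure is not given on this page; it is a minimal line-buffer/shift-register convolution pipeline written in Python, with an assumed 5×5 kernel and hypothetical names (pipelined_conv, K, H, W), intended only to show why one output per cycle follows once the window registers are full.

    # Hypothetical software model of a deep-pipelined 2-D convolution datapath.
    # It is not the circuit from the paper; it only illustrates why a streaming,
    # fully pipelined design can emit one convolution result per clock cycle and
    # therefore needs about 28*28 cycles for one 28x28 MNIST image.

    import numpy as np

    K = 5        # assumed kernel size (the paper does not state it)
    H = W = 28   # MNIST image size

    def pipelined_conv(image, kernel):
        """Stream the image one pixel per 'cycle' through K-1 line buffers and a
        KxK window of shift registers; emit one result per cycle once the window
        contains valid data (a 'valid'-only convolution)."""
        line_buffers = np.zeros((K - 1, W))   # models on-chip row buffers
        window = np.zeros((K, K))             # models the register window
        outputs, cycles = [], 0

        for r in range(H):
            for c in range(W):
                cycles += 1                   # one new pixel consumed per cycle
                pixel = image[r, c]

                # Column entering the window: K-1 buffered pixels above plus the
                # pixel arriving on the current row.
                new_col = np.append(line_buffers[:, c], pixel)

                # Shift the window left by one column and append the new column.
                window = np.column_stack((window[:, 1:], new_col))

                # Update the line buffers for this column (rows move up by one).
                line_buffers[:, c] = new_col[1:]

                # Once enough rows and columns have streamed in, the window is
                # valid and one multiply-accumulate result leaves every cycle.
                if r >= K - 1 and c >= K - 1:
                    outputs.append(float(np.sum(window * kernel)))

        return np.array(outputs).reshape(H - K + 1, W - K + 1), cycles

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img, ker = rng.random((H, W)), rng.random((K, K))
        out, cycles = pipelined_conv(img, ker)
        ref = np.array([[np.sum(img[i:i + K, j:j + K] * ker)
                         for j in range(W - K + 1)] for i in range(H - K + 1)])
        print("matches direct convolution:", np.allclose(out, ref))
        print("cycles for one 28x28 image:", cycles)   # 784 = 28 * 28

In hardware, the K×K multiplications and the adder tree would be fully unrolled and pipelined rather than evaluated in a loop, so after an initial fill latency a new pixel enters and a new result leaves on every clock edge; the 28×28-cycle figure in the abstract corresponds to streaming the 784 pixels of one image through such a datapath, ignoring that fixed latency.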

Key words: convolutional neural network, Field Programmable Gate Array (FPGA), deep pipelining, acceleration