Computer Engineering and Applications ›› 2020, Vol. 56 ›› Issue (24): 66-71. DOI: 10.3778/j.issn.1002-8331.2003-0035

• Theory, Research and Development •


Design of IP Cores for CNN Convolutional Layer and Pooling Layer Capable of Time Division Multiplexing

ZHANG Wei, LIU Yuhong, ZHANG Rongfen   

  1. College of Big Data and Information Engineering, Guizhou University, Guiyang 550025, China
  • Online:2020-12-15 Published:2020-12-15


Abstract:

In recent years, more and more researchers have chosen Field Programmable Gate Arrays (FPGA) to implement neural network algorithms, and the dominant implementation approaches are the Verilog Hardware Description Language (Verilog HDL) and High-Level Synthesis (HLS). Because HLS is easy to understand and use and offers short development cycles, this paper adopts HLS to design the convolutional layer and pooling layer of a Convolutional Neural Network (CNN); once the corresponding IP cores have been generated, the complete system is built by reusing them through time-division multiplexing. The design is verified on the MNIST handwritten digit data set: a 10-layer CNN is deployed on a Xilinx ZYNQ-7000 xc7z010clg400-1 FPGA, and the average recognition accuracy after 10,000 iterations is 95.34%. The proposed IP cores are of practical significance for rapidly implementing neural networks for image processing on FPGAs.
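To make the HLS-based design flow concrete, the following is a minimal sketch of how a single-channel convolutional layer and a 2x2 max-pooling layer might be written in Vivado HLS C++ before being packaged as IP cores. The fixed-point format, the 28x28 input and 3x3 kernel sizes, and the function names conv2d and maxpool2x2 are illustrative assumptions and are not taken from the paper.

    #include <ap_fixed.h>              // Xilinx HLS arbitrary-precision types

    typedef ap_fixed<16, 6> data_t;    // assumed 16-bit fixed-point format

    #define IMG 28                     // assumed input size (MNIST)
    #define K    3                     // assumed convolution kernel size
    #define OUT (IMG - K + 1)          // valid-convolution output size

    // 3x3 convolution over one feature map, followed by a ReLU activation.
    void conv2d(const data_t in[IMG][IMG], const data_t w[K][K],
                data_t bias, data_t out[OUT][OUT]) {
        for (int r = 0; r < OUT; r++) {
            for (int c = 0; c < OUT; c++) {
    #pragma HLS PIPELINE II=1
                data_t acc = bias;
                for (int i = 0; i < K; i++)
                    for (int j = 0; j < K; j++)
                        acc += in[r + i][c + j] * w[i][j];
                out[r][c] = (acc > data_t(0)) ? acc : data_t(0);  // ReLU
            }
        }
    }

    // 2x2 max pooling with stride 2.
    void maxpool2x2(const data_t in[OUT][OUT], data_t out[OUT / 2][OUT / 2]) {
        for (int r = 0; r < OUT / 2; r++) {
            for (int c = 0; c < OUT / 2; c++) {
    #pragma HLS PIPELINE II=1
                data_t m = in[2 * r][2 * c];
                if (in[2 * r][2 * c + 1] > m)     m = in[2 * r][2 * c + 1];
                if (in[2 * r + 1][2 * c] > m)     m = in[2 * r + 1][2 * c];
                if (in[2 * r + 1][2 * c + 1] > m) m = in[2 * r + 1][2 * c + 1];
                out[r][c] = m;
            }
        }
    }

In a typical Vivado HLS flow, each such top-level function is synthesized and exported as an RTL IP core that can then be instantiated in the ZYNQ block design and driven from the ARM processing system.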

Key words: Convolutional Neural Network (CNN), Field Programmable Gate Array (FPGA), High-Level Synthesis (HLS), IP core, time-division multiplexing
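The time-division multiplexing mentioned in the abstract, in which one physical convolution/pooling core is reused for every layer of the 10-layer network, could be orchestrated from the ARM processing system roughly as sketched below. This is a speculative illustration under assumed names: LayerCfg, run_conv_pool_ip and the ping-pong buffers are not the authors' actual interface, and the driver body that would program the IP core over AXI is left as a stub.

    #include <cstddef>

    // Hypothetical per-layer descriptor for the shared convolution/pooling core.
    struct LayerCfg {
        int in_channels;
        int out_channels;
        int feature_size;        // input feature-map width/height
        const float *weights;    // this layer's weights, e.g. resident in DDR
    };

    // Stub for the driver call that would configure the IP core, start it and
    // wait for completion; the real version would perform AXI register writes.
    static void run_conv_pool_ip(const LayerCfg &cfg, const float *src, float *dst) {
        (void)cfg; (void)src; (void)dst;
    }

    // Time-division multiplexing at the system level: the single hardware core
    // processes the layers one after another, ping-ponging between two buffers
    // so that the output of layer l becomes the input of layer l + 1.
    void run_network(const LayerCfg cfgs[], std::size_t num_layers,
                     float *buf_a, float *buf_b) {
        float *src = buf_a;
        float *dst = buf_b;
        for (std::size_t l = 0; l < num_layers; ++l) {
            run_conv_pool_ip(cfgs[l], src, dst);   // reuse the same IP core
            float *tmp = src;                      // swap buffers for the next layer
            src = dst;
            dst = tmp;
        }
    }

Because only one core is instantiated, the FPGA resource usage stays roughly constant regardless of the number of layers, at the cost of processing the layers sequentially.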