Computer Engineering and Applications ›› 2023, Vol. 59 ›› Issue (20): 94-102. DOI: 10.3778/j.issn.1002-8331.2206-0371

• Pattern Recognition and Artificial Intelligence •

CIRBlock: Lightweight Inverted Residuals Module with Cheap Convolution

YU Haikun, LYU Zhigang, WANG Peng, LI Xiaoyan, WANG Hongxi, LI Liangliang   

  1. School of Electronic Information Engineering, Xi'an Technological University, Xi'an 710021, China
  2. School of Mechatronic Engineering, Xi'an Technological University, Xi'an 710021, China
  3. Development Planning Service, Xi'an Technological University, Xi'an 710021, China
  • Online: 2023-10-15   Published: 2023-10-15


Abstract: Because the inverted residual block adopted by the lightweight convolutional neural network MobileNet still involves considerable redundant computation, a lighter inverted residual module, the cheap inverted residuals block (CIRBlock), is constructed, and a new lightweight convolutional neural network, CIRNet, is designed. First, a cheap convolution operation simplifies the pointwise convolutions, and a bypass branch reuses features to reduce the number of output channels of the inverted residual block. Then, a channel attention mechanism and channel shuffling enhance the exchange of information between channels. Next, in the down-sampling module, the bypass-branch information is used to build the same topology as the main branch, improving the channel diversity of the redundant feature structure. Finally, the design of the lightweight network module CIRBlock is completed, and lightweight convolutional neural networks CIRNet of different complexities are constructed by manually stacking CIRBlocks. Experiments on object classification show that, on the CIFAR datasets under the same VGG16 architecture, CIRBlock reduces FLOPs by 58.1% and parameters by 55.5% compared with the inverted residual block of MobileNetV2, with a classification accuracy loss of less than 0.4%. On the Mini-ImageNet dataset, CIRNet is 0.35% more accurate than MobileNetV2 while reducing FLOPs by 69% and parameters by 77.4%.
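The abstract does not give the exact internals of CIRBlock, but the source of the savings can be illustrated with a parameter count. Below is a minimal sketch, assuming a standard MobileNetV2-style inverted residual (1×1 expand, 3×3 depthwise, 1×1 project) and a hypothetical "cheap" variant in which each pointwise convolution produces only half of its output channels directly, generating the other half with a cheap depthwise operation; the function names and the exact cheap-generation scheme are illustrative assumptions, not the paper's definitive design.

```python
def inverted_residual_params(c, t=6, k=3):
    """Parameter count of a standard MobileNetV2 inverted residual block
    (1x1 expand -> kxk depthwise -> 1x1 project); BN and biases ignored."""
    hidden = t * c
    expand = c * hidden          # 1x1 pointwise expansion
    depthwise = hidden * k * k   # kxk depthwise convolution
    project = hidden * c         # 1x1 pointwise projection
    return expand + depthwise + project

def cheap_inverted_residual_params(c, t=6, k=3):
    """Illustrative 'cheap' variant (an assumption, not the exact CIRBlock):
    each pointwise convolution emits only half its channels directly, and a
    cheap kxk depthwise op on that half generates the remaining channels."""
    hidden = t * c
    expand = c * (hidden // 2) + (hidden // 2) * k * k
    depthwise = hidden * k * k
    project = hidden * (c // 2) + (c // 2) * k * k
    return expand + depthwise + project

c = 64  # example channel width
std = inverted_residual_params(c)
cheap = cheap_inverted_residual_params(c)
print(std, cheap, round(1 - cheap / std, 3))  # → 52608 30048 0.429
```

Even this simplified scheme removes roughly 43% of the block's parameters, since the two pointwise convolutions dominate the cost; the paper's reported 55.5% reduction additionally relies on the bypass branch and feature reuse described in the abstract.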

Key words: machine vision, lightweight convolutional neural network, inverted residuals block, target classification
