Computer Engineering and Applications ›› 2021, Vol. 57 ›› Issue (12): 216-223.DOI: 10.3778/j.issn.1002-8331.2009-0175


Improved YOLOv3 Helmet Wearing Detection Method

XIAO Tigang, CAI Lecai, GAO Xiang, HUANG Hongbin, ZHANG Chaoyang   

  1. School of Automation and Information Engineering, Sichuan University of Science & Engineering, Zigong, Sichuan 643000, China
  2. Sanjiang Artificial Intelligence and Robot Research Institute, Yibin University, Yibin, Sichuan 644000, China
  • Online: 2021-06-15  Published: 2021-06-10


Abstract:

Aiming at the problems of low accuracy and slow detection speed in helmet wearing detection for intelligent surveillance, a helmet wearing detection algorithm, YOLOv3-WH, based on an improved YOLOv3 (You Only Look Once) is proposed. The network structure is improved on the basis of the YOLOv3 algorithm. The scale of the input image is increased. Depthwise separable convolutions replace the conventional convolutions of Darknet-53, which reduces feature loss, shrinks the model parameters, and raises the detection speed. Multi-scale feature detection adds a shallow detection scale together with a 4-fold up-sampling feature fusion structure, which improves the accuracy of helmet wearing detection. The K-Means clustering algorithm is optimized to obtain the anchor boxes for helmet wearing detection, and suitable anchors are assigned according to the prediction scale, which speeds up model training and detection. Experimental results show that, compared with YOLOv3, YOLOv3-WH increases the number of frames detected per second (FPS) by 64% and the mean average precision (mAP) by 6.5%. The algorithm improves the speed of helmet wearing detection while also improving its accuracy, and it is of practical value for helmet wearing detection.
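The parameter saving behind the Darknet-53 modification can be made concrete. A minimal sketch, counting weights only: a standard k×k convolution versus a depthwise convolution followed by a 1×1 pointwise convolution. The layer size (3×3 kernel, 256 → 512 channels, typical of a Darknet-53 stage) is an illustrative assumption, not a figure from the paper.

```python
# Parameter-count comparison: standard vs. depthwise separable convolution.
# Biases and batch-norm parameters are ignored for simplicity.

def conv_params(k, c_in, c_out):
    """Weights of a standard k x k convolution."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1x1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 256, 512)                  # 1,179,648 weights
sep = depthwise_separable_params(3, 256, 512)   # 133,376 weights
print(f"standard: {std}, separable: {sep}, ratio: {sep / std:.3f}")
```

For this layer the separable form keeps roughly 11% of the weights, which is the source of the smaller model and faster inference the abstract describes.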

Key words: helmet wearing detection, YOLOv3, depthwise separable convolution, feature fusion, K-Means
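The anchor-box step can be sketched as follows. This shows only the standard K-Means baseline for YOLO anchors with the usual 1-IoU distance; the paper's specific optimization of K-Means is not described in the abstract, and the toy box list is invented for illustration.

```python
# K-Means clustering of ground-truth (width, height) boxes to obtain
# anchor boxes, using IoU (boxes assumed to share a common center) as
# the similarity measure, as is standard for YOLO-family detectors.
import random

def iou(box, centroid):
    """IoU of two (w, h) boxes placed at the same center."""
    inter = min(box[0], centroid[0]) * min(box[1], centroid[1])
    union = box[0] * box[1] + centroid[0] * centroid[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    random.seed(seed)
    centroids = random.sample(boxes, k)
    for _ in range(iters):
        # Assign each box to the centroid with the highest IoU
        # (equivalently, the smallest 1 - IoU distance).
        clusters = [[] for _ in range(k)]
        for b in boxes:
            best = max(range(k), key=lambda i: iou(b, centroids[i]))
            clusters[best].append(b)
        # Recompute centroids as the mean (w, h) of each cluster.
        new = [
            (sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
        if new == centroids:
            break
        centroids = new
    # Sort by area so small anchors go to shallow (fine) detection
    # scales and large anchors to deep (coarse) ones.
    return sorted(centroids, key=lambda wh: wh[0] * wh[1])

boxes = [(10, 13), (16, 30), (33, 23), (30, 61), (62, 45), (59, 119)]
anchors = kmeans_anchors(boxes, k=3)
print(anchors)
```

Sorting the resulting anchors by area mirrors the allocation the abstract mentions: each anchor is assigned to the prediction scale whose receptive field best matches its size.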
