[1] REDMON J,DIVVALA S,GIRSHICK R,et al.You only look once:unified,real-time object detection[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition,2016:779-788.
[2] REDMON J,FARHADI A.YOLO9000:better,faster,stronger[J].arXiv:1612.08242,2016.
[3] REDMON J,FARHADI A.YOLOv3:an incremental improvement[J].arXiv:1804.02767,2018.
[4] BOCHKOVSKIY A,WANG C Y,LIAO H Y M.YOLOv4:optimal speed and accuracy of object detection[J].arXiv:2004.10934,2020.
[5] 李志刚,张娜.一种轻量型YOLOv5交通标志识别方法[J].电讯技术,2022,62(9):1201-1206.
LI Z G,ZHANG N.A light-weight YOLOv5 traffic sign recognition method[J].Telecommunication Engineering,2022,62(9):1201-1206.
[6] MA N,ZHANG X,ZHENG H T,et al.ShuffleNet V2:practical guidelines for efficient CNN architecture design[C]//Proceedings of the European Conference on Computer Vision(ECCV),2018:116-131.
[7] ZHANG G,LI W J,ZHANG Y X.Traffic sign recognition based on the improved YOLOv5 algorithm[C]//Proceedings of the 2021 National Conference on Simulation Technology,2021:182-185.
[8] 李宇琼,周永军,蒋淑霞,等.基于注意力机制的交通标志识别[J].电子测量技术,2022,45(8):116-120.
LI Y Q,ZHOU Y J,JIANG S X,et al.Traffic sign recognition based on attention mechanism[J].Electronic Measurement Technology,2022,45(8):116-120.
[9] 王靖逸,刘树惠.基于改进YOLOv4的交通标志识别方法[J].电子设计工程,2022,30(18):184-188.
WANG J Y,LIU S H.Traffic sign recognition method based on improved YOLOv4[J].Electronic Design Engineering,2022,30(18):184-188.
[10] 党宏社,党晨,张选德.基于改进YOLOv5s的交通标志识别算法[J].实验技术与管理,2022,39(9):97-102.
DANG H S,DANG C,ZHANG X D.Traffic sign recognition algorithm based on improved YOLOv5s[J].Experimental Technology and Management,2022,39(9):97-102.
[11] 李松,亚森江·木沙.改进YOLOv7的X射线图像违禁品实时检测[J].计算机工程与应用,2023,59(12):193-200.
LI S,MUSA Y S J.Real-time detection of prohibited items in X-ray images based on improved YOLOv7[J].Computer Engineering and Applications,2023,59(12):193-200.
[12] 尹宋麟,谭飞,周晴,等.基于改进YOLOv4模型的交通标志检测[J].无线电工程,2022,52(11):2087-2093.
YIN S L,TAN F,ZHOU Q,et al.Traffic sign detection based on improved YOLOv4 model[J].Radio Engineering,2022,52(11):2087-2093.
[13] 陈德海,孙仕儒,王昱朝,等.一种改进YOLOv3的交通标志识别算法[J].河南科技大学学报(自然科学版),2022,43(6):31-36.
CHEN D H,SUN S R,WANG Y C,et al.An improved YOLOv3 traffic sign recognition algorithm[J].Journal of Henan University of Science and Technology(Natural Science),2022,43(6):31-36.
[14] 秦强强,廖俊国,周弋荀.基于多分支混合注意力的小目标检测算法[J/OL].计算机应用:1-9(2023-03-16)[2023-03-24].http://kns.cnki.net/kcms/detail/51.1307.TP.20230316.1610.012.html.
QIN Q Q,LIAO J G,ZHOU Y X.Small object detection algorithm based on multi-branching hybrid attention[J/OL].Journal of Computer Applications:1-9(2023-03-16)[2023-03-24].http://kns.cnki.net/kcms/detail/51.1307.TP.20230316.1610.012.html.
[15] WANG J W,CHEN Y,GAO M Y,et al.Improved YOLOv5 network for real-time multi-scale traffic sign detection[J].arXiv:2112.08782,2021.
[16] 朱开,陈慈发.基于YOLOv5的雾霾天气下交通标志识别[J/OL].电子测量技术:1-8(2023-02-21)[2023-03-24].http://kns.cnki.net/kcms/detail/11.2175.TN.20230221.1729.010.html.
ZHU K,CHEN C F.Traffic sign recognition in hazy weather based on YOLOv5[J/OL].Electronic Measurement Technology:1-8(2023-02-21)[2023-03-24].http://kns.cnki.net/kcms/detail/11.2175.TN.20230221.1729.010.html.
[17] 郎斌柯,吕斌,吴建清,等.基于CA-BIFPN的交通标志检测模型[J/OL].深圳大学学报(理工版):1-9(2023-02-23)[2023-03-24].http://kns.cnki.net/kcms/detail/44.1401.N.20230223.1413.004.html.
LANG B K,LV B,WU J Q,et al.Traffic sign detection model based on CA-BIFPN[J/OL].Journal of Shenzhen University(Science and Technology Edition):1-9(2023-02-23)[2023-03-24].http://kns.cnki.net/kcms/detail/44.1401.N.20230223.1413.004.html.
[18] 胡均平,王鸿树,戴小标,等.改进YOLOv5的小目标交通标志实时检测算法[J].计算机工程与应用,2023,59(2):185-193.
HU J P,WANG H S,DAI X B,et al.Real-time detection algorithm for small-target traffic signs based on improved YOLOv5[J].Computer Engineering and Applications,2023,59(2):185-193.
[19] 徐正军,张强,许亮.一种基于改进YOLOv5s-Ghost网络的交通标志识别方法[J].光电子·激光,2023,34(1):52-61.
XU Z J,ZHANG Q,XU L.A traffic sign identification method based on an improved YOLOv5s-Ghost network[J].Journal of Optoelectronics·Laser,2023,34(1):52-61.
[20] WANG C Y,BOCHKOVSKIY A,LIAO H Y M.YOLOv7:trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[J].arXiv:2207.02696,2022.
[21] HE K,ZHANG X,REN S,et al.Deep residual learning for image recognition[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition,2016:770-778.
[22] SHAH S A R,WU W J,LU Q M,et al.AmoebaNet:an SDN-enabled network service for big data science[J].Journal of Network and Computer Applications,2018,119:70-82.
[23] HOWARD A G,ZHU M,CHEN B,et al.MobileNets:efficient convolutional neural networks for mobile vision applications[J].arXiv:1704.04861,2017.
[24] ZHANG X Y,ZHOU X Y,LIN M X,et al.ShuffleNet:an extremely efficient convolutional neural network for mobile devices[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2018:6848-6856.
[25] IANDOLA F N,HAN S,MOSKEWICZ M W,et al.SqueezeNet:AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size[J].arXiv:1602.07360,2016.
[26] CHOLLET F.Xception:deep learning with depthwise separable convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2017:1800-1807.
[27] SIMONYAN K,ZISSERMAN A.Very deep convolutional networks for large-scale image recognition[J].arXiv:1409.1556,2014.
[28] LIN T Y,DOLLÁR P,GIRSHICK R,et al.Feature pyramid networks for object detection[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition,2017:2117-2125.
[29] LIU S,QI L,QIN H,et al.Path aggregation network for instance segmentation[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition,2018:8759-8768.
[30] MAHENDRAN A,VEDALDI A.Understanding deep image representations by inverting them[C]//Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition,2015:5188-5196.
[31] HENDRYCKS D,GIMPEL K.Bridging nonlinearities and stochastic regularizers with Gaussian error linear units[J].arXiv:1606.08415,2016.
[32] ZHANG J,HUANG M,JIN X,et al.A real-time Chinese traffic sign detection algorithm based on modified YOLOv2[J].Algorithms,2017,10(4):127.