[1] 谢世勇. 基于深度学习的电力着装检测系统的设计与实现[D]. 武汉: 华中科技大学, 2023.
XIE S Y. Design and implementation of a power industry dress code detection system based on deep learning[D]. Wuhan: Huazhong University of Science and Technology, 2023.
[2] 张伍康, 潘立志, 郭志彬, 等. 电力场景下基于RetinaNet的绝缘手套异常状态视觉检测方法[J]. 湖南科技大学学报 (自然科学版), 2022, 37(1): 85-91.
ZHANG W K, PAN L Z, GUO Z B, et al. Visual detection method of abnormal state of insulating gloves based on RetinaNet in power scenarios[J]. Journal of Hunan University of Science and Technology (Natural Science Edition), 2022, 37(1): 85-91.
[3] WEN C Y. The safety helmet detection technology and its application to the surveillance system[J]. Journal of Forensic Sciences, 2004, 49(4): 770-780.
[4] 刘晓慧, 叶西宁. 肤色检测和Hu矩在安全帽识别中的应用[J]. 华东理工大学学报 (自然科学版), 2014, 40(3): 365-370.
LIU X H, YE X N. Application of skin color detection and Hu moments in helmet recognition[J]. Journal of East China University of Science and Technology (Natural Science Edition), 2014, 40(3): 365-370.
[5] 陈健. 基于表现特征的人体着装分析与识别[D]. 北京: 北京邮电大学, 2010.
CHEN J. Human dress analysis and recognition based on appearance features[D]. Beijing: Beijing University of Posts and Telecommunications, 2010.
[6] 严杰峰. 带电作业用绝缘手套适用范围及安全问题[J]. 中国高新科技, 2020(13): 147-148.
YAN J F. Scope of application and safety issues of insulating gloves for live working[J]. China High and New Technology, 2020(13): 147-148.
[7] LONG X T, CUI W P, ZHENG Z. Safety helmet wearing detection based on deep learning[C]//2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), 2019: 2495-2499.
[8] SHEN J, XIONG X, LI Y, et al. Detecting safety helmet wearing on construction sites with bounding-box regression and deep transfer learning[J]. Computer-Aided Civil and Infrastructure Engineering, 2021, 36(2): 180-196.
[9] 刘思佳. 基于图像分割的工作人员着装检测系统设计与实现[D]. 武汉: 华中科技大学, 2022.
LIU S J. Design and implementation of staff dressing detection system based on image segmentation[D]. Wuhan: Huazhong University of Science and Technology, 2022.
[10] 刘欣宜, 张宝峰, 符烨, 等. 基于深度学习的污染场地作业人员着装规范性检测[J]. 中国安全生产科学技术, 2020, 16(7): 169-175.
LIU X Y, ZHANG B F, FU Y, et al. Detection on normalization of operating personnel dressing at contaminated sites based on deep learning[J]. Journal of Safety Science and Technology, 2020, 16(7): 169-175.
[11] 何国立, 齐冬莲, 闫云凤. 一种基于关键点检测和注意力机制的违规着装识别算法及其应用[J]. 中国电机工程学报, 2022, 42(5): 1826-1837.
HE G L, QI D L, YAN Y F. An illegal dress recognition algorithm based on key-point detection and attention mechanism and its application[J]. Proceedings of the CSEE, 2022, 42(5): 1826-1837.
[12] ZHAO B N, LAN H J, NIU Z W, et al. Detection and location of personal safety protective equipment and workers in power substations using a wear-enhanced YOLOv3 algorithm[J]. IEEE Access, 2021, doi: 10.1109/ACCESS.2021.3104731.
[13] 伏德粟, 高林, 刘威, 等. 基于改进YOLOv5算法的电力工人作业安全关键装备检测[J]. 湖北民族大学学报 (自然科学版), 2022, 40(3): 320-327.
FU D L, GAO L, LIU W, et al. Detection of key safety equipment for electric workers based on improved YOLOv5 algorithm[J]. Journal of Hubei Minzu University (Natural Science Edition), 2022, 40(3): 320-327.
[14] WADEKAR S N, CHAURASIA A. MobileViTv3: mobile-friendly vision transformer with simple and effective fusion of local, global and input features[J]. arXiv:2209.15159, 2022.
[15] TONG Z J, CHEN Y H, XU Z W, et al. Wise-IoU: bounding box regression loss with dynamic focusing mechanism[J]. arXiv:2301.10051, 2023.
[16] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]//2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014: 580-587.
[17] GIRSHICK R. Fast R-CNN[C]//2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 2015: 1440-1448.
[18] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[19] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 779-788.
[20] REDMON J, FARHADI A. YOLO9000: better, faster, stronger[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017: 6517-6525.
[21] BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: optimal speed and accuracy of object detection[J]. arXiv:2004.10934, 2020.
[22] GE Z, LIU S, WANG F, et al. YOLOX: exceeding YOLO series in 2021[J]. arXiv:2107.08430, 2021.
[23] LI C Y, LI L L, JIANG H L, et al. YOLOv6: a single-stage object detection framework for industrial applications[J]. arXiv:2209.02976, 2022.
[24] XU S, WANG X, LV W, et al. PP-YOLOE: an evolved version of YOLO[J]. arXiv:2203.16250, 2022.
[25] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot multibox detector[C]//European Conference on Computer Vision (ECCV), 2016: 21-37.
[26] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J]. arXiv:1706.03762, 2017.
[27] CARION N, MASSA F, SYNNAEVE G, et al. End-to-end object detection with transformers[J]. arXiv:2005.12872, 2020.
[28] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: transformers for image recognition at scale[J]. arXiv:2010.11929, 2020.
[29] LIU Z, LIN Y, CAO Y, et al. Swin transformer: hierarchical vision transformer using shifted windows[C]//2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021: 9992-10002.
[30] MEHTA S, RASTEGARI M. MobileViT: light-weight, general-purpose, and mobile-friendly vision transformer[J]. arXiv:2110.02178, 2021.
[31] ZHENG Z H, WANG P, LIU W, et al. Distance-IoU loss: faster and better learning for bounding box regression[J]. arXiv:1911.08287, 2019.
[32] MEHTA S, RASTEGARI M. Separable self-attention for mobile vision transformers[J]. arXiv:2206.02680, 2022.
[33] REZATOFIGHI H, TSOI N, GWAK J, et al. Generalized intersection over union: a metric and a loss for bounding box regression[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019: 658-666.
[34] GEVORGYAN Z. SIoU loss: more powerful learning for bounding box regression[J]. arXiv:2205.12740, 2022.
[35] ZHANG Y F, REN W, ZHANG Z, et al. Focal and efficient IOU loss for accurate bounding box regression[J]. arXiv:2101.08158, 2021.