[1] 路浩, 陈原. 基于机器视觉的碳纤维预浸料表面缺陷检测方法[J]. 纺织学报, 2020, 41(4): 51-57.
LU H, CHEN Y. Surface defect detection method of carbon fiber prepreg based on machine vision[J]. Journal of Textile Research, 2020, 41(4): 51-57.
[2] TIAN H, WANG D, LIN J, et al. Surface defects detection of stamping and grinding flat parts based on machine vision[J]. Sensors, 2020, 20(16): 4531-4536.
[3] 吕文涛, 林琪琪, 钟佳莹, 等. 面向织物疵点检测的图像处理技术研究进展[J]. 纺织学报, 2021, 42(11): 197-206.
LYU W T, LIN Q Q, ZHONG J Y, et al. Research progress on image processing techniques for fabric defect detection[J]. Journal of Textile Research, 2021, 42(11): 197-206.
[4] ZHAO X, LI W, ZHANG Y, et al. A faster RCNN-based pedestrian detection system[C]//Proceedings of the IEEE 84th Vehicular Technology Conference, 2016: 1-5.
[5] ZHAO W, HUANG H, LI D, et al. Pointer defect detection based on transfer learning and improved cascade-RCNN[J]. Sensors, 2020, 20(17): 4939.
[6] CARION N, MASSA F, SYNNAEVE G, et al. End-to-end object detection with transformers[C]//Proceedings of the 16th European Conference on Computer Vision, 2020: 213-229.
[7] ZHAI S, SHANG D, WANG S, et al. DF-SSD: an improved SSD object detection algorithm based on DenseNet and feature fusion[J]. IEEE Access, 2020, 8: 24344-24357.
[8] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 2980-2988.
[9] TAN M, PANG R, LE Q V. EfficientDet: scalable and efficient object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 10781-10790.
[10] DUAN K, BAI S, XIE L, et al. CenterNet: keypoint triplets for object detection[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019: 6569-6578.
[11] BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: optimal speed and accuracy of object detection[J]. arXiv:2004.10934, 2020.
[12] LI C, LI L, JIANG H, et al. YOLOv6: a single-stage object detection framework for industrial applications[J]. arXiv:2209.02976, 2022.
[13] WANG C Y, BOCHKOVSKIY A, LIAO H Y M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[J]. arXiv:2207.02696, 2022.
[14] 景军锋, 范晓婷, 李鹏飞, 等. 应用深度卷积神经网络的色织物缺陷检测[J]. 纺织学报, 2017, 38(2): 68-74.
JING J F, FAN X T, LI P F, et al. Defect detection of yarn-dyed fabrics using deep convolutional neural networks[J]. Journal of Textile Research, 2017, 38(2): 68-74.
[15] JING J, ZHUO D, ZHANG H, et al. Fabric defect detection using the improved YOLOv3 model[J]. Journal of Engineered Fibers and Fabrics, 2020, 15(4): 68-75.
[16] ZHU X, LYU S, WANG X, et al. TPH-YOLOv5: improved YOLOv5 based on transformer prediction head for object detection on drone-captured scenarios[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 2778-2788.
[17] ZHENG L, WANG X, WAN Q, et al. A fabric defect detection method based on improved YOLOv5[C]//Proceedings of the International Conference on Computer and Communications, 2021: 620-624.
[18] LIU Q, WANG C, LI Y, et al. A fabric defect detection method based on deep learning[J]. IEEE Access, 2022, 10: 4284-4296.
[19] 胡越杰, 蒋高明. 基于 YOLOv5-DCN 的织物疵点检测[J]. 棉纺织技术, 2023, 51(3): 8-14.
HU Y J, JIANG G M. Fabric defect detection based on YOLOv5-DCN[J]. Cotton Textile Technology, 2023, 51(3): 8-14.
[20] LI H, LI J, WEI H, et al. Slim-neck by GSConv: a better design paradigm of detector architectures for autonomous vehicles[J]. arXiv:2206.02424, 2022.
[21] GEVORGYAN Z. SIoU loss: more powerful learning for bounding box regression[J]. arXiv:2205.12740, 2022.
[22] GHIASI G, LIN T Y, LE Q V. NAS-FPN: learning scalable feature pyramid architecture for object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 7036-7045.
[23] ZHOU L, RAO X, LI Y, et al. A lightweight object detection method in aerial images based on dense feature fusion path aggregation network[J]. ISPRS International Journal of Geo-Information, 2022, 11(3): 189.
[24] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the European Conference on Computer Vision, 2018: 3-19.
[25] REZATOFIGHI H, TSOI N, GWAK J Y, et al. Generalized intersection over union: a metric and a loss for bounding box regression[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 658-666.
[26] ZHENG Z, WANG P, LIU W, et al. Distance-IoU loss: faster and better learning for bounding box regression[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 12993-13000.
[27] ZHANG Y F, REN W, ZHANG Z, et al. Focal and efficient IOU loss for accurate bounding box regression[J]. Neurocomputing, 2022, 506: 146-157.
[28] HOU Q, ZHOU D, FENG J. Coordinate attention for efficient mobile network design[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 13713-13722.
[29] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 7132-7141.
[30] LIU Y, SHAO Z, HOFFMANN N. Global attention mechanism: retain information to enhance channel-spatial interactions[J]. arXiv:2112.05561, 2021.