[1] 张磊, 姜志博, 王海涛, 等. 输电通道移动巡视管控系统构建[J]. 中国电力, 2020, 53(3): 35-42.
ZHANG L, JIANG Z B, WANG H T, et al. Construction of mobile patrol management and control system for power transmission channel[J]. Electric Power, 2020, 53(3): 35-42.
[2] 陈思雨, 付章杰. 融合高效注意力的多尺度输电线路部件检测[J]. 计算机工程与应用, 2024, 60(1): 327-336.
CHEN S Y, FU Z J. Multi-scale transmission line component detection incorporating efficient attention[J]. Computer Engineering and Applications, 2024, 60(1): 327-336.
[3] LI H, LIU L Z, DU J, et al. An improved YOLOv3 for foreign objects detection of transmission lines[J]. IEEE Access, 2022, 10: 45620-45628.
[4] YU C H, LIU Y K, ZHANG W R, et al. Foreign objects identification of transmission line based on improved YOLOv7[J]. IEEE Access, 2023, 11: 51997-52008.
[5] WANG Z Y, YUAN G W, ZHOU H, et al. Foreign-object detection in high-voltage transmission line based on improved YOLOv8m[J]. Applied Sciences, 2023, 13(23): 12775.
[6] WU M H, GUO L M, CHEN R, et al. Improved YOLOX foreign object detection algorithm for transmission lines[J]. Wireless Communications and Mobile Computing, 2022, 2022: 5835693.
[7] SU J Y, SU Y K, ZHANG Y, et al. EpNet: power lines foreign object detection with edge proposal network and data composition[J]. Knowledge-Based Systems, 2022, 249: 108857.
[8] WU Y Y, ZHAO S F, XING Z Z, et al. Detection of foreign objects intrusion into transmission lines using diverse generation model[J]. IEEE Transactions on Power Delivery, 2023, 38(5): 3551-3560.
[9] 谭志国, 欧建平, 张军, 等. 多特征组合的深度图像分割算法[J]. 计算机工程与科学, 2018, 40(8): 1429-1434.
TAN Z G, OU J P, ZHANG J, et al. Multi-feature combined depth image segmentation algorithm[J]. Computer Engineering & Science, 2018, 40(8): 1429-1434.
[10] LI H, LI Z, WU T, et al. Powerline detection and accurate localization method based on the depth image[C]//Proceedings of the 16th International Conference on Intelligent Robotics and Applications. Singapore: Springer Nature Singapore, 2023: 317-328.
[11] MAO T Q, HUANG K, ZENG X W, et al. Development of power transmission line defects diagnosis system for UAV inspection based on binocular depth imaging technology[C]//Proceedings of the 2nd International Conference on Electrical Materials and Power Equipment. Piscataway: IEEE, 2019: 478-481.
[12] MING Y, MENG X Y, FAN C X, et al. Deep learning for monocular depth estimation: a review[J]. Neurocomputing, 2021, 438: 14-33.
[13] GODARD C, MAC AODHA O, FIRMAN M, et al. Digging into self-supervised monocular depth estimation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 3827-3837.
[14] XUE F, CAO J F, ZHOU Y, et al. Boundary-induced and scene-aggregated network for monocular depth prediction[J]. Pattern Recognition, 2021, 115: 107901.
[15] FAROOQ BHAT S, ALHASHIM I, WONKA P. AdaBins: depth estimation using adaptive bins[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 4008-4017.
[16] LYU X Y, LIU L, WANG M M, et al. HR-Depth: high resolution self-supervised monocular depth estimation[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2021: 2294-2301.
[17] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[J]. arXiv:2010.11929, 2020.
[18] RANFTL R, BOCHKOVSKIY A, KOLTUN V. Vision Transformers for dense prediction[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 12159-12168.
[19] KIM D, KA W, AHN P, et al. Global-local path networks for monocular depth estimation with vertical CutDepth[J]. arXiv:2201.07436, 2022.
[20] SHAH S, DEY D, LOVETT C, et al. AirSim: high-fidelity visual and physical simulation for autonomous vehicles[J]. arXiv:1705.05065, 2017.
[21] GE Z, LIU S, WANG F, et al. YOLOX: exceeding YOLO series in 2021[J]. arXiv:2107.08430, 2021.
[22] CHEN L C, ZHU Y K, PAPANDREOU G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer International Publishing, 2018: 833-851.
[23] HOU Q B, ZHOU D Q, FENG J S. Coordinate attention for efficient mobile network design[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 13708-13717.
[24] GUO J Y, HAN K, WU H, et al. CMT: convolutional neural networks meet vision transformers[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 12165-12175.
[25] ZHANG N, NEX F, VOSSELMAN G, et al. Lite-Mono: a lightweight CNN and transformer architecture for self-supervised monocular depth estimation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 18537-18546.
[26] LI Z Y, CHEN Z H, LIU X M, et al. DepthFormer: exploiting long-range correlation and local information for accurate monocular depth estimation[J]. Machine Intelligence Research, 2023, 20(6): 837-854.
[27] ZHU X K, LYU S C, WANG X, et al. TPH-YOLOv5: improved YOLOv5 based on transformer prediction head for object detection on drone-captured scenarios[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops. Piscataway: IEEE, 2021: 2778-2788.
[28] CHEN S F, SUN P Z, SONG Y B, et al. DiffusionDet: diffusion model for object detection[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2023: 19773-19786.
[29] ZHAO Y A, LYU W Y, XU S L, et al. DETRs beat YOLOs on real-time object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2024: 16965-16974.