[1] 朱力强, 许力之, 赵文钰, 等. 铁路周界入侵目标多尺度特征感知算法[J]. 中国铁道科学, 2024, 45(1): 215-226.
ZHU L Q, XU L Z, ZHAO W Y, et al. Multi-scale feature perception algorithm for railway perimeter intrusion targets[J]. China Railway Science, 2024, 45(1): 215-226.
[2] 傅荟瑾. 基于深度学习的高铁周界入侵监测方法研究[D]. 北京: 中国铁道科学研究院, 2023.
FU H J. Research on high-speed rail perimeter intrusion monitoring methods based on deep learning[D]. Beijing: China Academy of Railway Sciences, 2023.
[3] 王前选, 梁习锋, 刘应龙, 等. 缓变异物入侵铁路线路视觉检测方法[J]. 中国铁道科学, 2014, 35(3): 137-143.
WANG Q X, LIANG X F, LIU Y L, et al. Visual detection method for the invasion of slowly changing foreign matters to railway lines[J]. China Railway Science, 2014, 35(3): 137-143.
[4] 董宏辉, 孙智源, 葛大伟, 等. 基于高斯混合模型的铁路入侵物体目标识别方法[J]. 中国铁道科学, 2011, 32(2): 131-135.
DONG H H, SUN Z Y, GE D W, et al. Target recognition method of railway invasion based on Gaussian mixture model[J]. China Railway Science, 2011, 32(2): 131-135.
[5] 史红梅, 柴华, 王尧, 等. 基于目标识别与跟踪的嵌入式铁路异物侵限检测算法研究[J]. 铁道学报, 2015, 37(7): 58-65.
SHI H M, CHAI H, WANG Y, et al. Study on embedded detection algorithm for railway foreign object intrusion based on object recognition and tracking[J]. Journal of the China Railway Society, 2015, 37(7): 58-65.
[6] 史天运, 侯博, 李国华, 等. 基于改进DINO的铁路接触网异物检测方法[J]. 中国铁道科学, 2024, 45(4): 158-167.
SHI T Y, HOU B, LI G H, et al. Foreign object detection method for railway contact network based on improved DINO[J]. China Railway Science, 2024, 45(4): 158-167.
[7] 王辉, 吴雨杰, 范自柱, 等. 基于深度学习的铁路限界快速检测算法[J]. 铁道科学与工程学报, 2023, 20(4): 1223-1231.
WANG H, WU Y J, FAN Z Z, et al. A rapid detection algorithm for railway gauge based on deep learning[J]. Journal of Railway Science and Engineering, 2023, 20(4): 1223-1231.
[8] WANG L F, WAN H, TANG X L, et al. Recurrent attention convolutional neural network optimise track foreign body detection[J]. IET Communications, 2023, 17(1): 1-11.
[9] 叶涛, 赵宗扬, 郑志康. 基于LAM-Net的轨道侵入界异物自主检测系统[J]. 仪器仪表学报, 2023, 43(9): 206-218.
YE T, ZHAO Z Y, ZHENG Z K. Autonomous detection system for foreign objects intruding into the track clearance based on LAM-Net[J]. Chinese Journal of Scientific Instrument, 2023, 43(9): 206-218.
[10] 徐岩, 陶慧青, 虎丽丽. 基于Faster R-CNN网络模型的铁路异物侵限检测算法研究[J]. 铁道学报, 2020, 42(5): 91-98.
XU Y, TAO H Q, HU L L. Railway foreign body intrusion detection based on Faster R-CNN network model[J]. Journal of the China Railway Society, 2020, 42(5): 91-98.
[11] 王辉, 姜朱丰, 吴雨杰, 等. 基于深度学习的铁路异物侵限快速检测方法[J]. 铁道科学与工程学报, 2024, 21(5): 2086-2098.
WANG H, JIANG Z F, WU Y J, et al. A rapid detection method for railway foreign object intrusion based on deep learning[J]. Journal of Railway Science and Engineering, 2024, 21(5): 2086-2098.
[12] YE T, ZHANG J, ZHAO Z Y, et al. Foreign body detection in rail transit based on a multi-mode feature-enhanced convolutional neural network[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(10): 18051-18063.
[13] WU F, GAO J L, HONG L Q, et al. G-NAS: generalizable neural architecture search for single domain generalization object detection[C]//Proceedings of the 38th AAAI Conference on Artificial Intelligence, 2024: 5958-5966.
[14] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]//Proceedings of the IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 2980-2988.
[15] LYU C Q, ZHANG W W, HUANG H A, et al. RTMDet: an empirical study of designing real-time object detectors[J]. arXiv:2212.07784, 2022.
[16] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 779-788.
[17] REDMON J, FARHADI A. YOLO9000: better, faster, stronger[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 7263-7271.
[18] REDMON J, FARHADI A. YOLOv3: an incremental improvement[J]. arXiv:1804.02767, 2018.
[19] WANG C Y, LIAO H Y, WU Y H, et al. CSPNet: a new backbone that can enhance learning capability of CNN[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 390-391.
[20] LI C Y, LI L L, GENG Y F, et al. YOLOv6 v3.0: a full-scale reloading[J]. arXiv:2301.05586, 2023.
[21] WANG C Y, BOCHKOVSKIY A, LIAO H Y. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 7464-7475.
[22] WANG C Y, YEH I, LIAO H Y. YOLOv9: learning what you want to learn using programmable gradient information[J]. arXiv:2402.13616, 2024.
[23] WANG A, CHEN H, LIU L H, et al. YOLOv10: real-time end-to-end object detection[J]. arXiv:2405.14458, 2024.
[24] ZHANG J, LI X T, LI J, et al. Rethinking mobile block for efficient attention-based models[C]//Proceedings of the IEEE International Conference on Computer Vision. Piscataway: IEEE, 2023: 1389-1400.
[25] OUYANG D L, HE S, ZHANG G Z, et al. Efficient multi-scale attention module with cross-spatial learning[C]//Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway: IEEE, 2023: 1-5.
[26] SANDLER M, HOWARD A, ZHU M L, et al. MobileNetV2: inverted residuals and linear bottlenecks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 4510-4520.
[27] ZHENG Z H, WANG P, LIU W, et al. Distance-IoU loss: faster and better learning for bounding box regression[C]//Proceedings of the 34th AAAI Conference on Artificial Intelligence, 2020: 12993-13000.
[28] YU Z C, LIU Q L, WANG W, et al. DALNet: a rail detection network based on dynamic anchor line[J]. IEEE Transactions on Instrumentation and Measurement, 2024, 73: 1-14.
[29] ZHAO Y, LV W Y, XU S L, et al. DETRs beat YOLOs on real-time object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2024: 16965-16974.
[30] WANG Z Y, LI C, XU H Y, et al. Mamba YOLO: SSMs-based YOLO for object detection[J]. arXiv:2406.05835, 2024.
[31] JIAO J Y, TANG Y M, LIN K Y, et al. DilateFormer: multi-scale dilated Transformer for visual recognition[J]. IEEE Transactions on Multimedia, 2023, 25: 8906-8919.
[32] ZHU L, WANG X J, KE Z H, et al. BiFormer: vision Transformer with bi-level routing attention[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 10323-10333.
[33] HOU Q B, ZHOU D Q, FENG J S. Coordinate attention for efficient mobile network design[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 13708-13717.
[34] SELVARAJU R R, COGSWELL M, DAS A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization[C]//Proceedings of the IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 618-626.