[1] 夏博光. 基于RFID技术的高速综合检测列车时空校准系统研究[D]. 北京: 中国铁道科学研究院, 2011.
XIA B G. The research of high-speed comprehensive inspection train space-time calibration system based on RFID technology[D]. Beijing: China Academy of Railway Sciences, 2011.
[2] 龚利, 赵延杰, 朱明辉. 一种基于北斗和5G技术融合的复杂环境下机车定位方法[J]. 北京交通大学学报, 2021, 45(2): 44-51.
GONG L, ZHAO Y J, ZHU M H. A fusion method based on Beidou and 5G technology for locomotive positioning in complex environment[J]. Journal of Beijing Jiaotong University, 2021, 45(2): 44-51.
[3] MUR-ARTAL R, MONTIEL J M M, TARDóS J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163.
[4] KHALIQ S, ANJUM M L, HUSSAIN W, et al. Why ORB-SLAM is missing commonly occurring loop closures?[J]. Autonomous Robots, 2023, 47(8): 1519-1535.
[5] KOESTLER L, YANG N, ZELLER N, et al. TANDEM: tracking and dense mapping in real-time using deep multi-view stereo[C]//Proceedings of the Conference on Robot Learning, 2022: 34-45.
[6] MILDENHALL B, SRINIVASAN P P, TANCIK M, et al. NeRF: representing scenes as neural radiance fields for view synthesis[J]. Communications of the ACM, 2021, 65(1): 99-106.
[7] 朱沛尧, 周海波, 张浩宇, 等. 移动机器人视觉同步定位与建图方法[J/OL]. 天津理工大学学报: 1-10[2025-01-05]. https://link.cnki.net/urlid/12.1374.N.20240514.1049.004.
ZHU P Y, ZHOU H B, ZHANG HY, et al. Visual simultaneous localization and mapping method for mobile robot[J/OL]. Journal of Tianjin University of Technology: 1-10[2025-01-05]. https://link.cnki.net/urlid/12.1374.N.20240514.1049.004.
[8] 张干, 周非, 张阔, 等. 基于语义和几何一致性的视觉SLAM回环检测算法[J]. 计算机工程与应用, 2024, 60(20): 180-188.
ZHANG G, ZHOU F, ZHANG K, et al. Visual SLAM loop closure algorithm based on semantic and geometric consistency[J]. Computer Engineering and Applications, 2024, 60(20): 180-188.
[9] 沈斯杰, 田昕, 魏国亮, 等. 基于2D激光雷达的SLAM算法研究综述[J]. 计算机技术与发展, 2022, 32(1): 13-18.
SHEN S J, TIAN X, WEI G L, et al. Review of SLAM algorithm based on 2D lidar[J]. Computer Technology and Development, 2022, 32(1): 13-18.
[10] PARKINSON B, SPILKER JR J J, AXELRAD P, et al. Global positioning system: theory and applications, volume I[M]. [S. l.]: American Institute of Aeronautics and Astronautics, 1996.
[11] PEREZ-RUIZ M, SLAUGHTER D C, GLIEVER C, et al. Tractor-based real-time kinematic-global positioning system (RTK-GPS) guidance system for geospatial mapping of row crop transplant[J]. Biosystems Engineering, 2012, 111(1): 64-71.
[12] WANG Y S, SONG W W, LOU Y D, et al. Rail vehicle locali-
zation and mapping with LiDAR-vision-inertial-GNSS fusion[J]. IEEE Robotics and Automation Letters, 2022, 7(4): 9818-9825.
[13] CHEN C, ZHU H, WANG L, et al. A stereo visual-inertial SLAM approach for indoor mobile robots in unknown environments without occlusions[J]. IEEE Access, 2019, 7: 185408-185421.
[14] TSCHOPP F, SCHNEIDER T, PALMER A W, et al. Experimental comparison of visual-aided odometry methods for rail vehicles[J]. IEEE Robotics and Automation Letters, 2019, 4(2): 1815-1822.
[15] OTEGUI J, BAHILLO A, LOPETEGI I, et al. A survey of train positioning solutions[J]. IEEE Sensors Journal, 2017, 17(20): 6788-6797.
[16] DENG Z X, SONG H F, HUANG H, et al. Multi-sensor based train localization and data fusion in autonomous train control system[C]//Proceedings of the 2020 Chinese Automation Congress. Piscataway: IEEE, 2020: 5702-5707.
[17] SONG H F, SUN Z Y, WANG H W, et al. Enhancing train position perception through AI-driven multi-source information fusion[J]. Control Theory and Technology, 2023, 21(3): 425-436.
[18] 杜少聪, 张红钢, 王小敏. 基于改进YOLOv5的钢轨表面缺陷检测[J]. 北京交通大学学报, 2023, 47(2): 129-136.
DU S C, ZHANG H G, WANG X M. Rail surface defect detection based on improved YOLOv5[J]. Journal of Beijing Jiaotong University, 2023, 47(2): 129-136.
[19] 刘海斌, 张友兵, 周奎, 等. 改进YOLOv5-S的交通标志检测算法[J]. 计算机工程与应用, 2024, 60(5): 200-209.
LIU H B, ZHANG Y B, ZHOU K, et al. Traffic sign detection algorithm based on improved YOLOv5-S[J]. Computer Engineering and Applications, 2024, 60(5): 200-209.
[20] JIANG P Y, ERGU D J, LIU F Y, et al. A review of YOLO algorithm developments[J]. Procedia Computer Science, 2022, 199: 1066-1073.
[21] CAO J K, PANG J M, WENG X S, et al. Observation-centric SORT: rethinking SORT for robust multi-object tracking[C]//Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 9686-9696.
[22] FORTUN D, BOUTHEMY P, KERVRANN C. Optical flow modeling and computation: a survey[J]. Computer Vision and Image Understanding, 2015, 134: 1-21.
[23] FEI C J, ZHANG Q L, CAI Z J, et al. Edge assisted fast optical flow matching SLAM in underground rescue environments[C]//Proceedings of the Chinese Conference on Pattern Recognition and Computer Vision. Singapore: Springer, 2025: 3-17. |