[1] 刘铭哲, 徐光辉, 唐堂, 等. 激光雷达SLAM算法综述[J]. 计算机工程与应用, 2024, 60(1): 1-14.
LIU M Z, XU G H, TANG T, et al. Review of SLAM based on lidar[J]. Computer Engineering and Applications, 2024, 60(1): 1-14.
[2] 刘志成, 王华龙, 马兴录. 激光即时定位与建图算法综述[J]. 计算机测量与控制, 2024, 32(3): 1-8.
LIU Z C, WANG H L, MA X L. Summary of laser real-time positioning and mapping algorithm[J]. Computer Measurement & Control, 2024, 32(3): 1-8.
[3] MUR-ARTAL R, MONTIEL J M M, TARDÓS J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163.
[4] MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
[5] 王朋, 郝伟龙, 倪翠, 等. 视觉SLAM方法综述[J]. 北京航空航天大学学报, 2024, 50(2): 359-367.
WANG P, HAO W L, NI C, et al. An overview of visual SLAM methods[J]. Journal of Beijing University of Aeronautics and Astronautics, 2024, 50(2): 359-367.
[6] QIN T, LI P L, SHEN S J. VINS-Mono: a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.
[7] GENEVA P, ECKENHOFF K, LEE W, et al. OpenVINS: a research platform for visual-inertial estimation[C]//Proceedings of the 2020 IEEE International Conference on Robotics and Automation. Piscataway: IEEE, 2020: 4666-4672.
[8] CAMPOS C, ELVIRA R, RODRÍGUEZ J J G, et al. ORB-SLAM3: an accurate open-source library for visual, visual-inertial, and multimap SLAM[J]. IEEE Transactions on Robotics, 2021, 37(6): 1874-1890.
[9] BLOESCH M, OMARI S, HUTTER M, et al. Robust visual inertial odometry using a direct EKF-based approach[C]//Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2015: 298-304.
[10] 孙永全, 田红丽. 视觉惯性SLAM综述[J]. 计算机应用研究, 2019, 36(12): 3530-3533.
SUN Y Q, TIAN H L. Overview of visual inertial SLAM[J]. Application Research of Computers, 2019, 36(12): 3530-3533.
[11] 徐少杰, 曹雏清, 王永娟. 视觉SLAM在室内动态场景中的应用研究[J]. 计算机工程与应用, 2021, 57(8): 175-179.
XU S J, CAO C Q, WANG Y J. Application research of visual SLAM in indoor dynamic scenes[J]. Computer Engineering and Applications, 2021, 57(8): 175-179.
[12] RAN T, YUAN L, ZHANG J B, et al. RS-SLAM: a robust semantic SLAM in dynamic environments based on RGB-D sensor[J]. IEEE Sensors Journal, 2021, 21(18): 20657-20664.
[13] ZHAO H S, SHI J P, QI X J, et al. Pyramid scene parsing network[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 6230-6239.
[14] FAN Y C, ZHANG Q C, TANG Y L, et al. Blitz-SLAM: a semantic SLAM in dynamic environments[J]. Pattern Recognition, 2022, 121: 108225.
[15] DVORNIK N, SHMELKOV K, MAIRAL J, et al. BlitzNet: a real-time deep network for scene understanding[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 4174-4182.
[16] WANG Y N, TIAN Y B, CHEN J W, et al. A survey of visual SLAM in dynamic environment: the evolution from geometric to semantic approaches[J]. IEEE Transactions on Instrumentation and Measurement, 2024, 73: 1-21.
[17] 刘辉, 张雪波, 李如意, 等. 双目视觉辅助的激光惯导SLAM算法[J]. 控制与决策, 2024, 39(6): 1787-1800.
LIU H, ZHANG X B, LI R Y, et al. Stereo vision aided lidar-inertial SLAM[J]. Control and Decision, 2024, 39(6): 1787-1800.
[18] DANG X W, RONG Z, LIANG X D. Sensor fusion-based approach to eliminating moving objects for SLAM in dynamic environments[J]. Sensors, 2021, 21(1): 230.
[19] XU X B, ZHANG L, YANG J, et al. A review of multi-sensor fusion SLAM systems based on 3D LIDAR[J]. Remote Sensing, 2022, 14(12): 2835.
[20] SONG B Y, YUAN X F, YING Z M, et al. DGM-VINS: visual-inertial SLAM for complex dynamic environments with joint geometry feature extraction and multiple object tracking[J]. IEEE Transactions on Instrumentation and Measurement, 2023, 72: 1-11.
[21] ZHUANG Y M, JIA P R, LIU Z, et al. Amos-SLAM: an anti-dynamics two-stage RGB-D SLAM approach[J]. IEEE Transactions on Instrumentation and Measurement, 2024, 73: 1-10.
[22] ACHANTA R, SHAJI A, SMITH K, et al. SLIC superpixels compared to state-of-the-art superpixel methods[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(11): 2274-2282.
[23] 沈晔湖, 陈嘉皓, 李星, 等. 基于几何-语义联合约束的动态环境视觉SLAM算法[J]. 数据采集与处理, 2022, 37(3): 597-608.
SHEN Y H, CHEN J H, LI X, et al. Dynamic visual SLAM based on unified geometric-semantic constraints[J]. Journal of Data Acquisition and Processing, 2022, 37(3): 597-608.
[24] SOLÀ J. Quaternion kinematics for the error-state Kalman filter[J]. arXiv:1711.02508, 2017.
[25] RUBLEE E, RABAUD V, KONOLIGE K, et al. ORB: an efficient alternative to SIFT or SURF[C]//Proceedings of the 2011 International Conference on Computer Vision. Piscataway: IEEE, 2011: 2564-2571.
[26] 张雪涛, 方勇纯, 张雪波, 等. 基于误差状态卡尔曼滤波估计的旋翼无人机输入饱和控制[J]. 机器人, 2020, 42(4): 394-405.
ZHANG X T, FANG Y C, ZHANG X B, et al. Error state Kalman filter estimator based input saturated control for rotorcraft unmanned aerial vehicle[J]. Robot, 2020, 42(4): 394-405.
[27] YANG H, SHI J N, CARLONE L. TEASER: fast and certifiable point cloud registration[J]. IEEE Transactions on Robotics, 2021, 37(2): 314-333.
[28] STURM J, ENGELHARD N, ENDRES F, et al. A benchmark for the evaluation of RGB-D SLAM systems[C]//Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2012: 573-580.
[29] ZHANG H, JIN L Q, YE C. The VCU-RVI benchmark: evaluating visual inertial odometry for indoor navigation applications with an RGB-D camera[C]//Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2021: 6209-6214.