[1] SMITH R, SELF M, CHEESEMAN P. Estimating uncertain spatial relationships in robotics[J]. Machine Intelligence and Pattern Recognition, 1988, 5(5): 435-461.
[2] CAMPOS C, ELVIRA R, RODRÍGUEZ J J G, et al. ORB-SLAM3: an accurate open-source library for visual, visual-inertial, and multimap SLAM[J]. IEEE Transactions on Robotics, 2021, 37(6): 1874-1890.
[3] QIN T, LI P L, SHEN S J. VINS-Mono: a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.
[4] SUN K, MOHTA K, PFROMMER B, et al. Robust stereo visual inertial odometry for fast autonomous flight[J]. IEEE Robotics and Automation Letters, 2018, 3(2): 965-972.
[5] MUR-ARTAL R, TARDÓS J D. Visual-inertial monocular SLAM with map reuse[J]. IEEE Robotics and Automation Letters, 2017, 2(2): 796-803.
[6] KUNDU A, KRISHNA K M, SIVASWAMY J. Moving object detection by multi-view geometric techniques from a single camera mounted robot[C]//Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2009: 4306-4312.
[7] DING X Y. Moving object detection in dynamic background[J]. Journal of Physics: Conference Series, 2021, 1955(1): 012113.
[8] LIN K, LIANG X W, CAI J Y. Dynamic RGB-D SLAM algorithm based on reprojection depth difference cumulative map and static probability[J]. Journal of Zhejiang University (Engineering Science), 2022, 56(6): 1062-1070. (in Chinese)
[9] WEI T, LI X. Binocular vision SLAM algorithm based on dynamic region elimination in dynamic environment[J]. Robot, 2020, 42(3): 336-345. (in Chinese)
[10] BRASCH N, BOZIC A, LALLEMAND J, et al. Semantic monocular SLAM for highly dynamic environments[C]//Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2018: 393-400.
[11] BESCOS B, FÁCIL J M, CIVERA J, et al. DynaSLAM: tracking, mapping, and inpainting in dynamic scenes[J]. IEEE Robotics and Automation Letters, 2018, 3(4): 4076-4083.
[12] BESCOS B, CAMPOS C, TARDÓS J D, et al. DynaSLAM II: tightly-coupled multi-object tracking and SLAM[J]. IEEE Robotics and Automation Letters, 2021, 6(3): 5191-5198.
[13] CUI L, MA C. SOF-SLAM: a semantic visual SLAM for dynamic environments[J]. IEEE Access, 2019, 7: 166528-166539.
[14] WANG F Q, WANG Q, LI M, et al. Adaptive moving object processing SLAM algorithm based on scene dynamic classification[J]. Application Research of Computers, 2023, 40(8): 2361-2366. (in Chinese)
[15] YU C, LIU Z, LIU X J, et al. DS-SLAM: a semantic visual SLAM towards dynamic environments[C]//Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2018: 1168-1174.
[16] MA J Y, ZHAO J, TIAN J W, et al. Robust point matching via vector field consensus[J]. IEEE Transactions on Image Processing, 2014, 23(4): 1706-1721.
[17] DEMPSTER A P, LAIRD N M, RUBIN D B. Maximum likelihood from incomplete data via the EM algorithm[J]. Journal of the Royal Statistical Society Series B: Statistical Methodology, 1977, 39(1): 1-22.
[18] BAKER S, MATTHEWS I. Lucas-Kanade 20 years on: a unifying framework[J]. International Journal of Computer Vision, 2004, 56(3): 221-255.
[19] STURM J, ENGELHARD N, ENDRES F, et al. A benchmark for the evaluation of RGB-D SLAM systems[C]//Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2012: 573-580.
[20] GEIGER A, LENZ P, URTASUN R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]//Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2012: 3354-3361.
[21] BURRI M, NIKOLIC J, GOHL P, et al. The EuRoC micro aerial vehicle datasets[J]. International Journal of Robotics Research, 2016, 35(10): 1157-1163.