[1] SAPUTRA M R U, MARKHAM A, TRIGONI N. Visual SLAM and structure from motion in dynamic environments: a survey[J]. ACM Computing Surveys, 2019, 51(2): 1-36.
[2] CAMPOS C, ELVIRA R, RODRIGUEZ J J G, et al. ORB-SLAM3: an accurate open-source library for visual, visual-inertial and multi-map SLAM[J]. IEEE Transactions on Robotics, 2021, 37(6): 1874-1890.
[3] KLEIN G, MURRAY D. Parallel tracking and mapping for small AR workspaces[C]//Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, 2007: 225-234.
[4] FORSTER C, PIZZOLI M, SCARAMUZZA D. SVO: fast semi-direct monocular visual odometry[C]//Proceedings of the IEEE International Conference on Robotics and Automation, 2014: 15-22.
[5] HARTLEY R, ZISSERMAN A. Multiple view geometry in computer vision[M]. Cambridge: Cambridge University Press, 2003.
[6] WU W, GUO L, GAO H, et al. YOLO-SLAM: a semantic SLAM system towards dynamic environment with geometric constraint[J]. Neural Computing and Applications, 2022, 34(8): 6011-6026.
[7] LI G H, CHEN S L. Visual SLAM in dynamic scenes based on object tracking and static points detection[J]. Journal of Intelligent & Robotic Systems, 2022, 104: 33.
[8] HU X, ZHANG Y Z, CAO Z Z, et al. CFP-SLAM: a real-time visual SLAM based on coarse-to-fine probability in dynamic environments[C]//Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2022: 4399-4406.
[9] YU C, LIU Z, LIU X J, et al. DS-SLAM: a semantic visual SLAM towards dynamic environments[C]//Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018: 1168-1174.
[10] BESCOS B, FACIL J M, CIVERA J, et al. DynaSLAM: tracking, mapping and inpainting in dynamic scenes[J]. IEEE Robotics and Automation Letters, 2018, 3(4): 4076-4083.
[11] BESCOS B, CAMPOS C, TARDÓS J D, et al. DynaSLAM II: tightly-coupled multi-object tracking and SLAM[J]. IEEE Robotics and Automation Letters, 2021, 6(3): 5191-5198.
[12] QIU Y, WANG C, WANG W, et al. AirDOS: dynamic SLAM benefits from articulated objects[C]//Proceedings of the International Conference on Robotics and Automation, 2022: 8047-8053.
[13] ZHUANG Y, JIA P, LIU Z, et al. Amos-SLAM: an anti-dynamics two-stage RGB-D SLAM approach[J]. IEEE Transactions on Instrumentation and Measurement, 2024, 73: 1-10.
[14] ZHU J, LI H, ZHANG T. Camera, LiDAR, and IMU based multi-sensor fusion SLAM: a survey[J]. Tsinghua Science and Technology, 2024, 29(2): 415-429.
[15] LIU J, LI X, LIU Y, et al. RGB-D inertial odometry for a resource-restricted robot in dynamic environments[J]. IEEE Robotics and Automation Letters, 2022, 7(4): 9573-9580.
[16] SONG S, LIM H, LEE A J, et al. DynaVINS: a visual-inertial SLAM for dynamic environments[J]. IEEE Robotics and Automation Letters, 2022, 7(4): 11523-11530.
[17] WONG Y S, LI C, NIESSNER M, et al. RigidFusion: RGB-D scene reconstruction with rigidly-moving objects[J]. Computer Graphics Forum, 2021, 40(2): 511-522.
[18] 郭瑞奇, 修睿, 孙勇, 等. 面向动态环境的紧耦合视觉惯性SLAM改进算法[J]. 计算机工程与应用, 2025, 61(4): 339-348.
GUO R Q, XIU R, SUN Y, et al. An improved tightly-coupled visual-inertial SLAM algorithm for dynamic environments[J]. Computer Engineering and Applications, 2025, 61(4): 339-348.
[19] 高贵, 伍宣衡, 王忠美, 等. V-SLAM深度学习闭环检测研究进展与展望[J]. 计算机工程与应用, 2022, 58(11): 47-59.
GAO G, WU X H, WANG Z M, et al. Research progress and prospect of V-SLAM deep learning loop closure detection[J]. Computer Engineering and Applications, 2022, 58(11): 47-59.
[20] YAN D, LI T, SHI C. Enhanced online calibration and initialization of visual-inertial SLAM system leveraging the structure information[J]. IEEE Transactions on Instrumentation and Measurement, 2023, 72: 1-15.
[21] QIN T, LI P, SHEN S. VINS-Mono: a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.