Computer Engineering and Applications ›› 2025, Vol. 61 ›› Issue (8): 173-182. DOI: 10.3778/j.issn.1002-8331.2312-0362

• Pattern Recognition and Artificial Intelligence •


RGB-D Dynamic SLAM Algorithm Based on Ego-Motion Constraints

ZHANG Zhiming, ZHANG Jun, LIU Yuansheng, WANG Ziyu, ZHAO Yujie   

  1. Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing 100101, China
  2. College of Robotics, Beijing Union University, Beijing 100101, China
  3. College of Urban Rail Transit and Logistics, Beijing Union University, Beijing 100101, China
  • Online: 2025-04-15    Published: 2025-04-15


Abstract: Aiming at the problem that visual SLAM (simultaneous localization and mapping) algorithms are prone to relocalization failure in dynamic environments, a dynamic SLAM algorithm based on ego-motion constraints is proposed. Firstly, YOLOv5s is used to make a preliminary distinction between foreground and background feature points, and only the background feature points are used for pose initialization. Secondly, exploiting the property that IMU pose information is unaffected by the dynamic environment, an ego-motion constraint value is computed for each feature point. Finally, a dynamic probability model is designed from the constraint results of the background feature points; it adaptively determines the dynamic threshold for the feature-point constraints of the current frame, removes the dynamic feature points, and updates the camera pose. Experimental validation is carried out on simulated and real-environment datasets. The results show that on the simulated dataset the proposed method can remove feature points that move along epipolar lines; compared with Dyna-SLAM, the improvement rates in root mean square error and standard deviation are 87.81% and 83.17%, respectively, and compared with AirDOS they are 51.62% and 41.91%. In the real dynamic environment, the relocalization ability is stronger than that of Dyna-SLAM; relative to AirDOS there is no obvious accuracy improvement, but the running speed is increased by 29.48%.
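A minimal Python sketch of the two core steps named in the abstract follows; it is an illustration under stated assumptions, not the paper's exact formulation. The ego-motion constraint is rendered here as an RGB-D reprojection residual under the IMU-predicted relative pose (R, t), which, unlike a pure epipolar-distance test, also grows for points that move along the epipolar line; the adaptive dynamic threshold is rendered as a per-frame mean-plus-k-sigma statistic over the residuals of the YOLOv5s background points. All function names and parameters here are hypothetical.

    import numpy as np

    # Hypothetical sketch: per-feature ego-motion constraint value and the
    # adaptive dynamic threshold. The reprojection residual and the
    # mean + k*std threshold are illustrative assumptions, not the paper's
    # exact formulation.

    def ego_motion_residual(uv1, depth, uv2, R, t, K):
        # Back-project the frame-k feature uv1 to 3D with its RGB-D depth,
        # transform it with the IMU-predicted relative pose (R, t), and
        # project it into frame k+1. A static point lands near its match
        # uv2; a dynamic point leaves a large pixel residual, even when it
        # moves along the epipolar line.
        ray = np.linalg.inv(K) @ np.array([uv1[0], uv1[1], 1.0])
        point_next = R @ (depth * ray) + t
        proj = K @ point_next
        uv_pred = proj[:2] / proj[2]
        return np.linalg.norm(uv_pred - np.asarray(uv2, dtype=float))

    def adaptive_threshold(background_residuals, k=3.0):
        # Background points (those YOLOv5s places on no foreground object)
        # reflect the current frame's noise level, so the dynamic threshold
        # is set per frame from their statistics rather than fixed globally.
        r = np.asarray(background_residuals, dtype=float)
        return r.mean() + k * r.std()

    def classify_dynamic(residuals, threshold):
        # True marks a feature as dynamic; such points are removed before
        # the camera pose is re-estimated from the remaining static set.
        return np.asarray(residuals, dtype=float) > threshold

With this split, pose estimation would proceed on the static set only, matching the abstract's description of removing dynamic feature points and then updating the camera pose.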

Key words: dynamic scenes, visual SLAM, multi-sensor fusion, IMU pre-integration