
Computer Engineering and Applications ›› 2025, Vol. 61 ›› Issue (2): 1-18. DOI: 10.3778/j.issn.1002-8331.2406-0038
LIU Yue, LI Huayi, ZHANG Shijie, ZHANG Chao, ZHAO Xiangtian
Online: 2025-01-15
Published: 2025-01-15
Abstract: The visual-inertial system is one of the most widely used fundamental sensor systems in navigation; using it alone or in combination with other systems is the mainstream sensor configuration today. Initialization, as the stage that precedes and conditions navigation performance, determines the success or failure of navigation and has therefore been studied extensively. Existing initialization methods are diverse and strongly engineering-oriented, yet a comprehensive and systematic survey of them is lacking. Moreover, although mature initialization paradigms exist, the initialization stage still fails at a relatively high rate in practice, and some initialization methods that demonstrate good navigation performance have not been made public. As missions evolve and technology advances, novel visual-inertial systems and artificial intelligence bring both new improvements and new challenges to initialization. This paper first gives an overview of visual-inertial navigation systems and initialization technology, then surveys and discusses the key techniques of the initialization stage in detail, presents a classic initialization framework, and closes with a summary and outlook.
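The classic initialization framework the abstract refers to typically begins by estimating the gyroscope bias: rotation increments integrated from the gyroscope are aligned against drift-free rotations recovered by the vision front end over the same keyframe intervals, and the constant offset between them is solved by least squares. A minimal small-angle sketch on synthetic data (all values and variable names here are illustrative, not taken from the paper):

```python
import numpy as np

# Synthetic data: a constant gyro bias corrupts angular-rate readings.
rng = np.random.default_rng(0)
dt = 0.005                                   # 200 Hz IMU
n = 2000
true_bias = np.array([0.02, -0.01, 0.005])   # rad/s
omega_true = rng.normal(0.0, 0.5, size=(n, 3))
omega_meas = omega_true + true_bias + rng.normal(0.0, 1e-3, size=(n, 3))

# Vision supplies drift-free rotation increments per keyframe interval;
# here we emulate them by integrating the true rates (small-angle model).
k = 100                                      # IMU samples per keyframe interval
theta_vis = omega_true.reshape(-1, k, 3).sum(axis=1) * dt
theta_gyro = omega_meas.reshape(-1, k, 3).sum(axis=1) * dt

# Each interval gives theta_gyro - theta_vis ≈ bias * (k * dt);
# averaging the per-interval residuals is the least-squares bias estimate.
residual = (theta_gyro - theta_vis) / (k * dt)
bias_est = residual.mean(axis=0)
print(bias_est)                              # close to true_bias
```

In a real system the increments are rotation matrices or quaternions rather than small-angle vectors, and the bias is refined jointly with gravity and scale in the subsequent visual-inertial alignment step; the sketch only shows the linear-algebraic core of the idea.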
LIU Yue, LI Huayi, ZHANG Shijie, ZHANG Chao, ZHAO Xiangtian. Review of Initialization Technology for Visual-Inertial Navigation Systems[J]. Computer Engineering and Applications, 2025, 61(2): 1-18.