Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (1): 1-14. DOI: 10.3778/j.issn.1002-8331.2308-0455
LIU Mingzhe, XU Guanghui, TANG Tang, QIAN Xiaojian, GENG Ming
Online: 2024-01-01
Published: 2024-01-01
Abstract: Simultaneous localization and mapping (SLAM) is one of the key technologies for autonomous mobile robots and self-driving vehicles, and lidar is an essential sensor supporting SLAM algorithms. Focusing on lidar-based SLAM, this paper first introduces the overall framework of lidar SLAM, detailing the roles of the front-end odometry, back-end optimization, loop closure detection, and mapping modules and summarizing the algorithms each employs. It then surveys classic, representative open-source algorithms, ordered from 2D to 3D and from single-sensor to multi-sensor fusion. Commonly used open datasets, accuracy evaluation metrics, and evaluation tools are introduced. Finally, future directions of lidar SLAM are discussed along four dimensions: deep learning, multi-sensor fusion, multi-robot collaboration, and robustness.
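The front-end odometry module described above typically estimates motion by aligning consecutive scans, most classically with variants of the iterative closest point (ICP) algorithm. The following is an illustrative sketch only, not any cited implementation: a minimal 2D point-to-point ICP, where the function names and the brute-force nearest-neighbour matching are assumptions made for brevity.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form SVD (Kabsch) solution for R, t minimizing ||R@src_i + t - dst_i||.

    src, dst: (N, 2) arrays of corresponding points.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Alternate nearest-neighbour association and closed-form alignment.

    Returns the accumulated (R, t) mapping src onto dst.
    """
    R_total, t_total = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours; real systems use a k-d tree instead.
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Practical lidar front-ends rarely use this plain point-to-point form: the surveyed variants replace the error metric (point-to-line, point-to-plane, Generalized-ICP) or the representation (NDT voxel Gaussians, LOAM edge/planar features) to gain speed and robustness.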
LIU Mingzhe, XU Guanghui, TANG Tang, QIAN Xiaojian, GENG Ming. Review of SLAM Based on Lidar[J]. Computer Engineering and Applications, 2024, 60(1): 1-14.
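Among the accuracy evaluation metrics the survey mentions, the absolute trajectory error (ATE) is the most widely reported; evaluation tools such as evo compute its root-mean-square value after aligning the estimated trajectory to ground truth. A minimal sketch, assuming the two trajectories are already time-synchronized and expressed in the same frame (the usual Umeyama alignment step is omitted):

```python
import numpy as np

def ate_rmse(gt_xyz, est_xyz):
    """RMS of per-pose translational errors between (N, 3) position arrays."""
    err = np.linalg.norm(gt_xyz - est_xyz, axis=1)   # per-pose Euclidean error
    return float(np.sqrt(np.mean(err ** 2)))
```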