[1] CHENG K P, MOHAN R E, NHAN N H K, et al. Multi-objective genetic algorithm-based autonomous path planning for hinged-Tetro reconfigurable tiling robot[J]. IEEE Access, 2020, 8: 121267-121284.
[2] FERNANDES P B, OLIVEIRA R C L, NETO J V F. Trajectory planning of autonomous mobile robots applying a particle swarm optimization algorithm with peaks of diversity[J]. Applied Soft Computing, 2022, 116: 108108.
[3] YANG H, QI J, MIAO Y, et al. A new robot navigation algorithm based on a double-layer ant algorithm and trajectory optimization[J]. IEEE Transactions on Industrial Electronics, 2018, 66(11): 8557-8566.
[4] WANG J, CHI W, LI C, et al. Neural RRT*: learning-based optimal path planning[J]. IEEE Transactions on Automation Science and Engineering, 2020, 17(4): 1748-1758.
[5] 袁千贺, 魏国亮, 田昕, 等. 改进A*和DWA融合的移动机器人导航算法[J]. 小型微型计算机系统, 2023, 44(2): 334-339.
YUAN Q H, WEI G L, TIAN X, et al. Mobile robot navigation method based on fusion of improved A* algorithm and dynamic window approach[J]. Journal of Chinese Computer Systems, 2023, 44(2): 334-339.
[6] 李国进, 陈武, 易丐. 基于改进人工势场法的移动机器人导航控制[J]. 计算技术与自动化, 2017, 36(1): 52-56.
LI G J, CHEN W, YI G. Navigation control of mobile robot based on improved artificial potential field method[J]. Computing Technology and Automation, 2017, 36(1): 52-56.
[7] ZHU Y, WANG Z, CHEN C, et al. Rule-based reinforcement learning for efficient robot navigation with space reduction[J]. IEEE/ASME Transactions on Mechatronics, 2021, 27(2): 846-857.
[8] KAMIL F, HONG T S, KHAKSAR W, et al. New robot navigation algorithm for arbitrary unknown dynamic environments based on future prediction and priority behavior[J]. Expert Systems with Applications, 2017, 86: 274-291.
[9] YUAN J, WANG H, ZHANG H, et al. AUV obstacle avoidance planning based on deep reinforcement learning[J]. Journal of Marine Science and Engineering, 2021, 9(11): 1166.
[10] KRELL E, SHETA A, BALASUBRAMANIAN A P R, et al. Collision-free autonomous robot navigation in unknown environments utilizing PSO for path planning[J]. Journal of Artificial Intelligence and Soft Computing Research, 2019, 9(4): 267-282.
[11] KÄSTNER L, ZHAO X, SHEN Z, et al. Obstacle-aware waypoint generation for long-range guidance of deep-reinforcement-learning-based navigation approaches[J]. arXiv:2109.11639, 2021.
[12] KIM Y H, JANG J I, YUN S. End-to-end deep learning for autonomous navigation of mobile robot[C]//Proceedings of the 2018 IEEE International Conference on Consumer Electronics, 2018: 1-6.
[13] WATKINS-VALLS D, XU J, WAYTOWICH N, et al. Learning your way without map or compass: panoramic target driven visual navigation[C]//Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2020: 5816-5823.
[14] MULLER U, BEN J, COSATTO E, et al. Off-road obstacle avoidance through end-to-end learning[C]//Advances in Neural Information Processing Systems 18, 2005.
[15] LONG P, LIU W, PAN J. Deep-learned collision avoidance policy for distributed multiagent navigation[J]. IEEE Robotics and Automation Letters, 2017, 2(2): 656-663.
[16] TAI L, ZHANG J, LIU M, et al. Socially compliant navigation through raw depth inputs with generative adversarial imitation learning[C]//Proceedings of the 2018 IEEE International Conference on Robotics and Automation, 2018: 1111-1117.
[17] MNIH V, KAVUKCUOGLU K, SILVER D, et al. Human-level control through deep reinforcement learning[J]. Nature, 2015, 518(7540): 529-533.
[18] JIANG H, WAN K W, WANG H, et al. A dueling twin delayed DDPG architecture for mobile robot navigation[C]//Proceedings of the 2022 17th International Conference on Control, Automation, Robotics and Vision, 2022: 193-197.
[19] CHEN X, SU L, DAI H. Mapless navigation based on continuous deep reinforcement learning[C]//Proceedings of the 2021 China Automation Congress, 2021: 6758-6763.
[20] 刘浚嘉, 付庄, 谢荣理, 等. 模糊先验引导的高效强化学习移动机器人导航[J]. 机械与电子, 2021, 39(8): 72-76.
LIU J J, FU Z, XIE R L, et al. Inexplicit priori guided efficient reinforcement learning for mobile robot navigation[J]. Machinery & Electronics, 2021, 39(8): 72-76.
[21] 童小龙, 姚明海, 张灿淋. 基于未知环境状态新定义及知识启发的机器人导航Q学习算法[J]. 计算机系统应用, 2014, 23(1): 149-153.
TONG X L, YAO M H, ZHANG C L. A Q-learning algorithm for robot navigation based on a new definition of unknown environment states and knowledge heuristics[J]. Computer Systems & Applications, 2014, 23(1): 149-153.
[22] CHEN C, LIU Y, KREISS S, et al. Crowd-robot interaction: crowd-aware robot navigation with attention-based deep reinforcement learning[C]//Proceedings of the 2019 International Conference on Robotics and Automation, 2019: 6015-6022.
[23] HU H, ZHANG K, TAN A H, et al. A sim-to-real pipeline for deep reinforcement learning for autonomous robot navigation in cluttered rough terrain[J]. IEEE Robotics and Automation Letters, 2021, 6(4): 6569-6576.
[24] PFEIFFER M, SHUKLA S, TURCHETTA M, et al. Reinforced imitation: sample efficient deep reinforcement learning for mapless navigation by leveraging prior demonstrations[J]. IEEE Robotics and Automation Letters, 2018, 3(4): 4423-4430.
[25] KÄSTNER L, LI J, SHEN Z, et al. Enhancing navigational safety in crowded environments using semantic-deep-reinforcement-learning-based navigation[J]. arXiv:2109.11288, 2021.
[26] TAI L, PAOLO G, LIU M. Virtual-to-real deep reinforcement learning: continuous control of mobile robots for mapless navigation[C]//Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017: 31-36.
[27] LONG P, FAN T, LIAO X, et al. Towards optimally decentralized multi-robot collision avoidance via deep reinforcement learning[C]//Proceedings of the 2018 IEEE International Conference on Robotics and Automation, 2018: 6252-6259.
[28] LIU L, DUGAS D, CESARI G, et al. Robot navigation in crowded environments using deep reinforcement learning[C]//Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2020: 5671-5677.
[29] XIE L, WANG S, ROSA S, et al. Learning with training wheels: speeding up training with a simple controller for deep reinforcement learning[C]//Proceedings of the 2018 IEEE International Conference on Robotics and Automation, 2018: 6276-6283.
[30] XIE L, MIAO Y, WANG S, et al. Learning with stochastic guidance for robot navigation[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020, 32(1): 166-176.
[31] FAN T, LONG P, LIU W, et al. Distributed multi-robot collision avoidance via deep reinforcement learning for navigation in complex scenarios[J]. The International Journal of Robotics Research, 2020, 39(7): 856-892.
[32] 张俊友, 李鹏飞, 王树凤, 等. 基于贝叶斯网络模型的车辆碰撞概率预测[J]. 广西大学学报(自然科学版), 2018, 43(6): 2332-2340.
ZHANG J Y, LI P F, WANG S F, et al. Prediction of vehicle collision probability based on Bayesian networks[J]. Journal of Guangxi University (Natural Science Edition), 2018, 43(6): 2332-2340.
[33] BAEK M, JEONG D, CHOI D, et al. Vehicle trajectory prediction and collision warning via fusion of multisensors and wireless vehicular communications[J]. Sensors, 2020, 20(1): 288.
[34] WANG X, LIU J, QIU T, et al. A real-time collision prediction mechanism with deep learning for intelligent transportation system[J]. IEEE Transactions on Vehicular Technology, 2020, 69(9): 9497-9508.
[35] XIONG X, CHEN L, LIANG J. A new framework of vehicle collision prediction by combining SVM and HMM[J]. IEEE Transactions on Intelligent Transportation Systems, 2017, 19(3): 699-710.
[36] HÉBERT A, GUÉDON T, GLATARD T, et al. High-resolution road vehicle collision prediction for the city of Montreal[C]//Proceedings of the 2019 IEEE International Conference on Big Data, 2019: 1804-1813.