[1] 张斌, 何明, 陈希亮, 等. 改进DDPG算法在自动驾驶中的应用[J]. 计算机工程与应用, 2019, 55(10): 264-270.
ZHANG B, HE M, CHEN X L, et al. Self-driving via improved DDPG algorithm[J]. Computer Engineering and Applications, 2019, 55(10): 264-270.
[2] PAXTON C, RAMAN V, HAGER G D, et al. Combining neural networks and tree search for task and motion planning in challenging environments[C]//Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017: 6059-6066.
[3] HU X, TANG B, CHEN L, et al. Learning a deep cascaded neural network for multiple motion commands prediction in autonomous driving[J]. IEEE Transactions on Intelligent Transportation Systems, 2021, 22(12): 7585-7596.
[4] CHEN L, HU X, TIAN W, et al. Parallel planning: a new motion planning framework for autonomous driving[J]. IEEE/CAA Journal of Automatica Sinica, 2018, 6(1): 236-246.
[5] SHI T, WANG P, CHENG X, et al. Driving decision and control for autonomous lane change based on deep reinforcement learning[J]. arXiv:1904.10171, 2019.
[6] ARULKUMARAN K, DEISENROTH M P, BRUNDAGE M, et al. Deep reinforcement learning: a brief survey[J]. IEEE Signal Processing Magazine, 2017, 34(6): 26-38.
[7] KARAMAN S, WALTER M R, PEREZ A, et al. Anytime motion planning using the RRT*[C]//Proceedings of the 2011 IEEE International Conference on Robotics and Automation, 2011: 1478-1483.
[8] FARAG W, SALEH Z. Tuning of PID track followers for autonomous driving[C]//Proceedings of the 2018 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies, 2018: 1-7.
[9] WANG P, GAO S, LI L, et al. Obstacle avoidance path planning design for autonomous driving vehicles based on an improved artificial potential field algorithm[J]. Energies, 2019, 12(12): 2342.
[10] BOJARSKI M, TESTA D D, DWORAKOWSKI D, et al. End to end learning for self-driving cars[J]. arXiv:1604.07316, 2016.
[11] PAN Y, CHENG C A, SAIGOL K, et al. Imitation learning for agile autonomous driving[J]. International Journal of Robotics Research, 2020, 39(2/3): 286-302.
[12] HAWKE J, SHEN R, GURAU C, et al. Urban driving with conditional imitation learning[C]//Proceedings of the 2020 IEEE International Conference on Robotics and Automation, 2020: 251-257.
[13] WANG P, CHAN C Y, FORTELLE A. A reinforcement learning based approach for automated lane change maneuvers[C]//Proceedings of the 2018 IEEE Intelligent Vehicles Symposium, 2018: 1379-1384.
[14] KENDALL A, HAWKE J, JANZ D, et al. Learning to drive in a day[C]//Proceedings of the 2019 International Conference on Robotics and Automation, 2019: 8248-8254.
[15] BAUTISTA-MONTESANO R, GALLUZZI R, RUAN K, et al. Autonomous navigation at unsignalized intersections: a coupled reinforcement learning and model predictive control approach[J]. Transportation Research Part C: Emerging Technologies, 2022, 139: 103662.
[16] FUJIMOTO S, HOOF H V, MEGER D. Addressing function approximation error in actor-critic methods[C]//Proceedings of the International Conference on Machine Learning, 2018: 1587-1596.
[17] LIAO Y, YU G, CHEN P, et al. Modelling personalised car-following behaviour: a memory-based deep reinforcement learning approach[J]. Transportmetrica A: Transport Science, 2022: 1-29.
[18] HASSELT H V, GUEZ A, SILVER D. Deep reinforcement learning with double Q-learning[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2016: 2094-2100.
[19] WANG Z, SCHAUL T, HESSEL M, et al. Dueling network architectures for deep reinforcement learning[C]//Proceedings of the International Conference on Machine Learning, 2016: 1995-2003.
[20] XU H, YANG G, YU F, et al. End-to-end learning of driving models from large-scale video datasets[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 2174-2182.
[21] GORI M, MONFARDINI G, SCARSELLI F. A new model for learning in graph domains[C]//Proceedings of the IEEE International Joint Conference on Neural Networks, 2005: 729-734.
[22] KIPF T N, WELLING M. Semi-supervised classification with graph convolutional networks[C]//Proceedings of the International Conference on Learning Representations, 2017.
[23] ZHAO L, SONG Y, ZHANG C, et al. T-GCN: a temporal graph convolutional network for traffic prediction[J]. IEEE Transactions on Intelligent Transportation Systems, 2019, 21(9): 3848-3858.
[24] LIU Q, LI X, YUAN S, et al. Decision-making technology for autonomous vehicles: learning-based methods, applications and future outlook[C]//Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference, 2021: 30-37.
[25] 何倩倩, 孙静宇, 曾亚竹. 基于邻域感知图神经网络的会话推荐[J]. 计算机工程与应用, 2022, 58(9): 107-115.
HE Q Q, SUN J Y, ZENG Y Z. Neighborhood awareness graph neural networks for session-based recommendation[J]. Computer Engineering and Applications, 2022, 58(9): 107-115.
[26] 刘鑫, 梅红岩, 王嘉豪, 等. 图神经网络推荐方法研究[J]. 计算机工程与应用, 2022, 58(10): 41-49.
LIU X, MEI H Y, WANG J H, et al. Research on graph neural network recommendation method[J]. Computer Engineering and Applications, 2022, 58(10): 41-49.
[27] VAN SEIJEN H, FATEMI M, ROMOFF J, et al. Hybrid reward architecture for reinforcement learning[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017: 5398-5408.
[28] LI Q, PENG Z, ZHANG Q, et al. Improving the generalization of end-to-end driving through procedural generation[J]. arXiv:2012.13681, 2020.
[29] MICALE D, COSTANTINO G, MATTEUCCI I, et al. CAHOOT: a context-aware vehicular intrusion detection system[C]//Proceedings of the 2022 IEEE International Conference on Trust, Security and Privacy in Computing and Communications, 2022: 1211-1218.
[30] PENG Z, LI Q, LIU C, et al. Safe driving via expert guided policy optimization[C]//Proceedings of the Conference on Robot Learning, 2022: 1554-1563.
[31] CHEN Y, HAN W, ZHU Q H, et al. Target-driven obstacle avoidance algorithm based on DDPG for connected autonomous vehicles[J]. EURASIP Journal on Advances in Signal Processing, 2022, 2022(1): 1-22.
[32] CHEN L, HU X, TANG B, et al. Conditional DQN-based motion planning with fuzzy logic for autonomous driving[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(4): 2966-2977.
[33] LI Q, PENG Z, FENG L, et al. MetaDrive: composing diverse driving scenarios for generalizable reinforcement learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022: 3461-3475.