Computer Engineering and Applications, 2019, Vol. 55, Issue (7): 151-156. DOI: 10.3778/j.issn.1002-8331.1712-0297

• Pattern Recognition and Artificial Intelligence •

Optimized Deep Deterministic Policy Gradient Algorithm

KE Fengkai, ZHOU Weiti, ZHAO Daxing

  1. School of Mechanical Engineering, Hubei University of Technology, Wuhan 430068, China
  Online: 2019-04-01; Published: 2019-04-15

Abstract: Deep reinforcement learning is well suited to solving optimization problems in control. In continuous action control, precision requirements mean that the number of actions needed grows exponentially with the action dimension, so continuous actions are difficult to represent with a discrete action set. The Deep Deterministic Policy Gradient (DDPG) algorithm, based on the Actor-Critic framework, solves the continuous action control problem, but it still has shortcomings: its sampling scheme lacks guidance from sound theory, and when the action dimension is high, the gap between the optimal action and non-optimal actions is neglected. To address these problems, an improved DDPG algorithm with optimized sampling and precise critic evaluation is proposed and successfully applied to a simulation environment of a Selective Compliance Assembly Robot Arm (SCARA). Compared with the original DDPG algorithm, it achieves good results and realizes fast automatic positioning of the SCARA robot.

Key words: reinforcement learning, deep learning, continuous action control, robot arm
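
For readers unfamiliar with the baseline that the proposed method improves on, the following is a minimal PyTorch sketch of the standard DDPG actor-critic update: critic regression toward a bootstrapped target, a deterministic policy gradient step for the actor, and soft target-network updates. The network sizes, batch format, and hyperparameters are illustrative assumptions; the sketch does not implement the paper's optimized sampling or precise-critic modifications, whose details are not given in this abstract.

# Minimal baseline-DDPG sketch (illustrative assumptions, not the authors' settings).
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy: maps a state to a continuous action in [-1, 1]^act_dim."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    """Action-value function Q(s, a)."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.99, tau=0.005):
    # batch is assumed to hold float tensors with shapes [B, obs_dim], [B, act_dim],
    # [B, 1], [B, obs_dim], [B, 1] for (obs, act, rew, next_obs, done).
    obs, act, rew, next_obs, done = batch
    # Critic: regress Q(s, a) toward the bootstrapped target r + gamma * Q'(s', mu'(s')).
    with torch.no_grad():
        target_q = rew + gamma * (1 - done) * target_critic(next_obs, target_actor(next_obs))
    critic_loss = nn.functional.mse_loss(critic(obs, act), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: deterministic policy gradient, i.e. maximize Q(s, mu(s)).
    actor_loss = -critic(obs, actor(obs)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Soft (Polyak) update of the target networks.
    for net, target in ((actor, target_actor), (critic, target_critic)):
        for p, tp in zip(net.parameters(), target.parameters()):
            tp.data.mul_(1 - tau).add_(tau * p.data)

A full training loop would add a replay buffer and exploration noise on the actor's output; the sampling of that experience is the part the abstract says the proposed optimized sampling scheme targets.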