[1] ZHANG Y, WANG Y N, JIA L. Adaptive nonlinear disturbance observer control for curved surface processing robot[J]. Computer Engineering and Applications, 2020, 56(12): 256-264. (in Chinese)
[2] ZHANG W. Trajectory planning and application of industrial manipulator based on improved SSA algorithm[D]. Taiyuan: North University of China, 2023. (in Chinese)
[3] DE OLIVEIRA D M, CONCEICAO A G S. A fast 6DOF visual selective grasping system using point clouds[J]. Machines, 2023, 11(5): 540.
[4] YAN M Y, LI A, KALAKRISHNAN M, et al. Learning probabilistic multi-modal actor models for vision-based robotic grasping[C]//Proceedings of the 2019 International Conference on Robotics and Automation. Piscataway: IEEE, 2019: 4804-4810.
[5] IQBAL S, TREMBLAY J, CAMPBELL A, et al. Toward sim-to-real directional semantic grasping[C]//Proceedings of the 2020 IEEE International Conference on Robotics and Automation. Piscataway: IEEE, 2020: 7247-7253.
[6] KALAKRISHNAN M, RIGHETTI L, PASTOR P, et al. Learning force control policies for compliant manipulation[C]//Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2011: 4639-4644.
[7] JAIN S, ARGALL B. Grasp detection for assistive robotic manipulation[C]//Proceedings of the 2016 IEEE International Conference on Robotics and Automation. Piscataway: IEEE, 2016: 2015-2021.
[8] LU Q K, CHENNA K, SUNDARALINGAM B, et al. Planning multi-fingered grasps as probabilistic inference in a learned deep network[M]//Robotics research. Cham: Springer, 2020: 455-472.
[9] MOUSAVIAN A, EPPNER C, FOX D. 6-DOF GraspNet: variational grasp generation for object manipulation[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 2901-2910.
[10] XU K C, YU H X, LAI Q N, et al. Efficient learning of goal-oriented push-grasping synergy in clutter[J]. IEEE Robotics and Automation Letters, 2021, 6(4): 6337-6344.
[11] ZENG A, FLORENCE P R, TOMPSON J, et al. Transporter networks: rearranging the visual world for robotic manipulation[C]//Proceedings of the Conference on Robot Learning, 2020.
[12] WANG L R, XIANG Y, YANG W, et al. Goal-auxiliary actor-critic for 6D robotic grasping with point clouds[J]. arXiv:2010.00824, 2020.
[13] WANG L R, MENG X Y, XIANG Y, et al. Hierarchical policies for cluttered-scene grasping with latent plans[J]. IEEE Robotics and Automation Letters, 2022, 7(2): 2883-2890.
[14] LILLICRAP T P, HUNT J J, PRITZEL A, et al. Continuous control with deep reinforcement learning[J]. arXiv:1509.02971, 2015.
[15] TORABI F, WARNELL G, STONE P. Behavioral cloning from observation[J]. arXiv:1805.01954, 2018.
[16] BETZ T, FUJIISHI H, KOBAYASHI T. Behavioral cloning from observation with bi-directional dynamics model[C]//Proceedings of the 2021 IEEE/SICE International Symposium on System Integration. Piscataway: IEEE, 2021: 184-189.
[17] PANERATI J, ZHENG H H, ZHOU S Q, et al. Learning to fly: a gym environment with PyBullet physics for reinforcement learning of multi-agent quadcopter control[C]//Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2021: 7512-7519.
[18] WANG L R, XIANG Y, FOX D. Manipulation trajectory optimization with online grasp synthesis and selection[J]. arXiv:1911.10280, 2019.
[19] RATLIFF N, ZUCKER M, BAGNELL J A, et al. CHOMP: gradient optimization techniques for efficient motion planning[C]//Proceedings of the 2009 IEEE International Conference on Robotics and Automation. Piscataway: IEEE, 2009: 489-494.
[20] LI Y, WANG G, JI X Y, et al. DeepIM: deep iterative matching for 6D pose estimation[J]. International Journal of Computer Vision, 2020, 128(3): 657-678.
[21] CHANG A X, FUNKHOUSER T, GUIBAS L, et al. ShapeNet: an information-rich 3D model repository[J]. arXiv:1512.03012, 2015.
[22] CALLI B, WALSMAN A, SINGH A, et al. Benchmarking in manipulation research: the YCB object and model set and benchmarking protocols[J]. arXiv:1502.03143, 2015.
[23] PLOEGER K, LUTTER M, PETERS J. High acceleration reinforcement learning for real-world juggling with binary rewards[J]. arXiv:2010.13483, 2020.
[24] SEO M, VECCHIETTI L F, LEE S, et al. Rewards prediction-based credit assignment for reinforcement learning with sparse binary rewards[J]. IEEE Access, 2019, 7: 118776-118791.
[25] LASKEY M, LEE J, FOX R, et al. DART: noise injection for robust imitation learning[J]. arXiv:1703.09327, 2017.