[1] HUA X H, LI H X, ZENG J B, et al. A review of target recognition technology for fruit picking robots: from digital image processing to deep learning[J]. Applied Sciences, 2023, 13(7): 4160.
[2] FU B W, LEONG S K, LIAN X C, et al. 6D robotic assembly based on RGB-only object pose estimation[C]//Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2022: 4736-4742.
[3] BASIRI M, PEREIRA J, BETTENCOURT R, et al. Functionalities, benchmarking system and performance evaluation for a domestic service robot: people perception, people following, and pick and placing[J]. Applied Sciences, 2022, 12(10): 4819.
[4] 陈春朝, 孙东红. 基于YOLOv5的角度优化抓取检测算法研究[J]. 计算机工程与应用, 2024, 60(6): 172-179.
CHEN C C, SUN D H. Research on angle-optimised grasp detection algorithm based on YOLOv5[J]. Computer Engineering and Applications, 2024, 60(6): 172-179.
[5] BEGUIEL BERGOR B, HADJ BARAKA I, ZARDOUA Y, et al. Recent developments in robotic grasping detection[C]//Proceedings of the International Conference on Advanced Intelligent Systems for Sustainable Development. Cham: Springer, 2024: 35-44.
[6] JALAL A, SARWAR M Z, KIM K. RGB-D images for objects recognition using 3D point clouds and RANSAC plane fitting[C]//Proceedings of the 2021 International Bhurban Conference on Applied Sciences and Technologies. Piscataway: IEEE, 2021: 518-523.
[7] TIAN H K, SONG K C, LI S, et al. Rotation adaptive grasping estimation network oriented to unknown objects based on novel RGB-D fusion strategy[J]. Engineering Applications of Artificial Intelligence, 2023, 120: 105842.
[8] TEN PAS A, GUALTIERI M, SAENKO K, et al. Grasp pose detection in point clouds[J]. The International Journal of Robotics Research, 2017, 36(13/14): 1455-1473.
[9] SHAO L, FERREIRA F, JORDA M, et al. UniGrasp: learning a unified model to grasp with multifingered robotic hands[J]. IEEE Robotics and Automation Letters, 2020, 5(2): 2286-2293.
[10] ZENG A, YU K T, SONG S R, et al. Multi-view self-supervised deep learning for 6D pose estimation in the Amazon Picking Challenge[C]//Proceedings of the 2017 IEEE International Conference on Robotics and Automation. Piscataway: IEEE, 2017: 1386-1393.
[11] ZHOU Y, TUZEL O. VoxelNet: end-to-end learning for point cloud based 3D object detection[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 4490-4499.
[12] FANG H S, WANG C X, GOU M H, et al. GraspNet-1Billion: a large-scale benchmark for general object grasping[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 11441-11450.
[13] SUNDERMEYER M, MOUSAVIAN A, TRIEBEL R, et al. Contact-GraspNet: efficient 6-DoF grasp generation in cluttered scenes[C]//Proceedings of the 2021 IEEE International Conference on Robotics and Automation. Piscataway: IEEE, 2021: 13438-13444.
[14] ZHAI G Y, HUANG D Y, WU S C, et al. MonoGraspNet: 6-DoF grasping with a single RGB image[C]//Proceedings of the 2023 IEEE International Conference on Robotics and Automation. Piscataway: IEEE, 2023: 1708-1714.
[15] 肖贤鹏, 胡莉, 张静, 等. 基于多尺度特征融合的抓取位姿估计[J]. 计算机工程与应用, 2022, 58(10): 172-177.
XIAO X P, HU L, ZHANG J, et al. Grasp pose estimation based on multi-scale feature fusion[J]. Computer Engineering and Applications, 2022, 58(10): 172-177.
[16] DEPIERRE A, DELLANDRÉA E, CHEN L M. Jacquard: a large scale dataset for robotic grasp detection[C]//Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2018: 3511-3516.
[17] CALLI B, SINGH A, BRUCE J, et al. Yale-CMU-Berkeley dataset for robotic manipulation research[J]. The International Journal of Robotics Research, 2017, 36(3): 261-268.
[18] 杨高朝. 基于特征提取的点云自动配准优化研究[J]. 计算机工程与应用, 2018, 54(16): 163-168.
YANG G Z. Research on automatic registration of point clouds based on feature extraction[J]. Computer Engineering and Applications, 2018, 54(16): 163-168.
[19] LIANG H Z, MA X J, LI S, et al. PointNetGPD: detecting grasp configurations from point sets[C]//Proceedings of the 2019 International Conference on Robotics and Automation. Piscataway: IEEE, 2019: 3629-3635.
[20] CAI J H, CEN J, WANG H K, et al. Real-time collision-free grasp pose detection with geometry-aware refinement using high-resolution volume[J]. IEEE Robotics and Automation Letters, 2022, 7(2): 1888-1895.
[21] DE OLIVEIRA D M, CONCEICAO A G S. A fast 6DOF visual selective grasping system using point clouds[J]. Machines, 2023, 11(5): 540.
[22] HANG K Y, LI M, STORK J A, et al. Hierarchical fingertip space: a unified framework for grasp planning and in-hand grasp adaptation[J]. IEEE Transactions on Robotics, 2016, 32(4): 960-972.
[23] MARLIER N, BRÜLS O, LOUPPE G. Simulation-based Bayesian inference for robotic grasping[J]. arXiv:2303.05873, 2023.
[24] TOBIN J, BIEWALD L, DUAN R, et al. Domain randomization and generative models for robotic grasping[C]//Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2018: 3482-3489.
[25] PELLICANO A, BINKOFSKI F. The prominent role of perceptual salience in object discrimination: overt discrimination of graspable side does not activate grasping affordances[J]. Psychological Research, 2021, 85(3): 1234-1247.
[26] NGUYEN V D. Constructing force-closure grasps[J]. The International Journal of Robotics Research, 1988, 7(3): 3-16.
[27] HELOU E S, ZIBETTI M V W, AXEL L, et al. The discrete Fourier transform for golden angle linogram sampling[J]. Inverse Problems, 2019, 35(12): 125004.
[28] GORJUP G, GEREZ L, LIAROKAPIS M. Leveraging human perception in robot grasping and manipulation through crowdsourcing and gamification[J]. Frontiers in Robotics and AI, 2021, 8: 652760.
[29] SHAN X X, SHEN Y T, CAI H B, et al. Convolutional neural network optimization via channel reassessment attention module[J]. Digital Signal Processing, 2022, 123: 103408.
[30] YANG L, ZHANG R Y, LI L, et al. SimAM: a simple, parameter-free attention module for convolutional neural networks[C]//Proceedings of the International Conference on Machine Learning, 2021: 11863-11874.
[31] CHEN Z B, LIU Z X, XIE S J, et al. Grasp region exploration for 7-DoF robotic grasping in cluttered scenes[C]//Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2023: 3169-3175.
[32] LI Y M, KONG T, CHU R H, et al. Simultaneous semantic and collision learning for 6-DoF grasp pose estimation[C]//Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2021: 3571-3578.
[33] 高翔, 谢海晟, 朱博, 等. 基于多尺度特征融合和抓取质量评估的抓取生成方法[J]. 仪器仪表学报, 2023, 44(7): 101-111.
GAO X, XIE H S, ZHU B, et al. Grasp generation method based on multiscale features fusion and grasp quality assessment[J]. Chinese Journal of Scientific Instrument, 2023, 44(7): 101-111.
[34] WANG C X, FANG H S, GOU M H, et al. Graspness discovery in clutters for fast and accurate grasp detection[C]//Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 15944-15953.
[35] MA H X, HUANG D. Towards scale balanced 6-DoF grasp detection in cluttered scenes[C]//Proceedings of the Conference on Robot Learning, 2022.
[36] QIU J N, WANG F, DANG Z. Multi-source fusion for voxel-based 7-DoF grasping pose estimation[C]//Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2023: 968-975.