[1] CHENG H, WANG Y Y, MENG M Q H. A vision-based robot grasping system[J]. IEEE Sensors Journal, 2022, 22(10): 9610-9620.
[2] LU Q K, VAN DER MERWE M, SUNDARALINGAM B, et al. Multifingered grasp planning via inference in deep neural networks: outperforming sampling by learning differentiable models[J]. IEEE Robotics & Automation Magazine, 2020, 27(2): 55-65.
[3] MCGINN C, CULLINAN M, HOLLAND D, et al. Towards the design of a new humanoid robot for domestic applications[C]//Proceedings of the 2014 IEEE International Conference on Technologies for Practical Robot Applications. Piscataway: IEEE, 2014: 1-6.
[4] DU G G, WANG K, LIAN S G, et al. Vision-based robotic grasping from object localization, object pose estimation to grasp estimation for parallel grippers: a review[J]. Artificial Intelligence Review, 2021, 54(3): 1677-1734.
[5] VOHRA M, PRAKASH R, BEHERA L. Real-time grasp pose estimation for novel objects in densely cluttered environment[C]//Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication. Piscataway: IEEE, 2020: 1-6.
[6] CAI J H, CHENG H, ZHANG Z P, et al. MetaGrasp: data efficient grasping by affordance interpreter network[C]//Proceedings of the 2019 International Conference on Robotics and Automation. Piscataway: IEEE, 2019: 4960-4966.
[7] 阮国强, 曹雏清. 基于PointNet++的机器人抓取姿态估计[J]. 仪表技术与传感器, 2023(5): 44-48.
RUAN G Q, CAO C Q. Robot grasping attitude estimation based on PointNet++[J]. Instrument Technique and Sensor, 2023(5): 44-48.
[8] PATTEN T, PARK K, VINCZE M. DGCM-Net: dense geometrical correspondence matching network for incremental experience-based robotic grasping[J]. Frontiers in Robotics and AI, 2020, 7: 120.
[9] WANG C, XU D F, ZHU Y K, et al. DenseFusion: 6D object pose estimation by iterative dense fusion[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 3338-3347.
[10] 刘亚军, 訾斌, 王正雨, 等. 智能喷涂机器人关键技术研究现状及进展[J]. 机械工程学报, 2022, 58(7): 53-74.
LIU Y J, ZI B, WANG Z Y, et al. Research progress and trend of key technology of intelligent spraying robot[J]. Journal of Mechanical Engineering, 2022, 58(7): 53-74.
[11] GUEPFRIH M F, WALTRICH G, LAZZARIN T B. Unidirectional step-up DC-DC converter based on interleaved phases, coupled inductors, built-in transformer, and voltage multiplier cells[J]. IEEE Transactions on Industrial Electronics, 2023, 70(3): 2385-2395.
[12] 朱凯, 李理, 张彤, 等. 视觉Transformer在低级视觉领域的研究综述[J]. 计算机工程与应用, 2024, 60(4): 39-56.
ZHU K, LI L, ZHANG T, et al. Survey of vision transformer in low-level computer vision[J]. Computer Engineering and Applications, 2024, 60(4): 39-56.
[13] 孙刘杰, 赵进, 王文举, 等. 多尺度Transformer激光雷达点云3D物体检测[J]. 计算机工程与应用, 2022, 58(8): 136-146.
SUN L J, ZHAO J, WANG W J, et al. Multi-scale transformer lidar point cloud 3D object detection[J]. Computer Engineering and Applications, 2022, 58(8): 136-146.
[14] 杜佳锦, 柏正尧, 刘旭珩, 等. 融合几何注意力和多尺度特征点云配准网络[J]. 计算机工程与应用, 2024, 60(12): 234-244.
DU J J, BAI Z Y, LIU X H, et al. Fusion of geometric attention and multi-scale feature network for point cloud registration[J]. Computer Engineering and Applications, 2024, 60(12): 234-244.
[15] DONG M S, BAI Y X, WEI S M, et al. Robotic grasp detection based on transformer[M]//Intelligent Robotics and Applications. Cham: Springer, 2022: 437-448.
[16] NI P Y, ZHANG W G, ZHU X X, et al. PointNet++ grasping: learning an end-to-end spatial grasp generation algorithm from sparse point clouds[C]//Proceedings of the 2020 IEEE International Conference on Robotics and Automation. Piscataway: IEEE, 2020: 3619-3625.
[17] ELDAR Y, LINDENBAUM M, PORAT M, et al. The farthest point strategy for progressive image sampling[J]. IEEE Transactions on Image Processing, 1997, 6(9): 1305-1315.
[18] FANG H S, WANG C X, GOU M H, et al. GraspNet-1Billion: a large-scale benchmark for general object grasping[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 11441-11450.
[19] QI C R, YI L, SU H, et al. PointNet++: deep hierarchical feature learning on point sets in a metric space[C]//Advances in Neural Information Processing Systems 30, 2017.
[20] CUI B Y, LI Y M, CHEN M, et al. Fine-tune BERT with sparse self-attention mechanism[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2019: 3546-3551.
[21] YU Y R, SHI L, RAN Z H, et al. Research on power line extraction and modeling technology based on laser point cloud[J]. Journal of Physics: Conference Series, 2023, 2503(1): 012044.
[22] WANG P S. OctFormer: octree-based transformers for 3D point clouds[J]. ACM Transactions on Graphics, 2023, 42(4): 1-11.
[23] LI Y J, CAI J T. Point cloud classification network based on self-attention mechanism[J]. Computers and Electrical Engineering, 2022, 104: 108451.
[24] WU X Y, LAO Y X, JIANG L, et al. Point transformer V2: grouped vector attention and partition-based pooling[C]//Advances in Neural Information Processing Systems 35, 2022: 33330-33342.
[25] LECUN Y, BENGIO Y, HINTON G. Deep learning[J]. Nature, 2015, 521(7553): 436-444.
[26] MA H X, HUANG D. Towards scale balanced 6-DoF grasp detection in cluttered scenes[C]//Proceedings of the 6th Conference on Robot Learning, 2023: 2004-2013.
[27] WANG C X, FANG H S, GOU M H, et al. Graspness discovery in clutters for fast and accurate grasp detection[C]//Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2022: 15944-15953.
[28] MORRISON D, LEITNER J, CORKE P. Closing the loop for robotic grasping: a real-time, generative grasp synthesis approach[C]//Robotics: Science and Systems XIV, 2018. DOI:10.15607/rss.2018.xiv.021.
[29] CHU F J, XU R N, VELA P A. Real-world multiobject, multi-grasp detection[J]. IEEE Robotics and Automation Letters, 2018, 3(4): 3355-3362.
[30] TEN PAS A, GUALTIERI M, SAENKO K, et al. Grasp pose detection in point clouds[J]. The International Journal of Robotics Research, 2017, 36(13/14): 1455-1473.
[31] LIANG H Z, MA X J, LI S, et al. PointNetGPD: detecting grasp configurations from point sets[C]//Proceedings of the 2019 International Conference on Robotics and Automation. Piscataway: IEEE, 2019: 3629-3635.
[32] GOU M H, FANG H S, ZHU Z D, et al. RGB matters: learning 7-DoF grasp poses on monocular RGBD images[C]//Proceedings of the 2021 IEEE International Conference on Robotics and Automation. Piscataway: IEEE, 2021: 13459-13466.
[33] NAKAMACHI E, UETSUJI Y, KURAMAE H, et al. Process crystallographic simulation for biocompatible piezoelectric material design and generation[J]. Archives of Computational Methods in Engineering, 2013, 20(2): 155-183.
[34] ZHU X P, WANG D, BIZA O, et al. Sample efficient grasp learning using equivariant models[C]//Robotics: Science and Systems XVIII, 2022. DOI:10.15607/rss.2022.xviii.071.