[1] MALLICK A, DEL POBIL A P, CERVERA E. Deep learning based object recognition for robot picking task[C]//Proceedings of the 12th International Conference on Ubiquitous Information Management and Communication (IMCOM 2018), New York, USA, 2018: 1-9.
[2] JIANG Y, MOSESON S, SAXENA A. Efficient grasping from RGBD images: learning using a new rectangle representation[C]//IEEE International Conference on Robotics and Automation, 2011: 3304-3311.
[3] 张志康, 魏赟. 基于语义分割的两阶段抓取检测算法[J/OL]. 计算机集成制造系统: 1-15[2022-12-11]. http://kns.cnki.net/kcms/detail/11.5946.TP.20220517.1009.008.html.
ZHANG Z K, WEI Y. Two-stage grasp detection algorithm based on semantic segmentation network[J/OL]. Computer Integrated Manufacturing Systems: 1-15[2022-12-11]. http://kns.cnki.net/kcms/detail/11.5946.TP.20220517.1009.008.html.
[4] 陈丹, 林清泉. 基于级联式Faster RCNN的三维目标最优抓取方法研究[J]. 仪器仪表学报, 2019, 40(4): 229-237.
CHEN D, LIN Q Q. Research on 3D object optimal grasping method based on cascaded Faster RCNN[J]. Chinese Journal of Scientific Instrument, 2019, 40(4): 229-237.
[5] 孙先涛, 程伟, 陈文杰, 等. 基于深度学习的视觉检测及抓取方法[J]. 北京航空航天大学学报, 2023, 49(10): 2635-2644.
SUN X T, CHENG W, CHEN W J, et al. A visual detection and grasping method based on deep learning[J]. Journal of Beijing University of Aeronautics and Astronautics, 2023, 49(10): 2635-2644.
[6] CHU F J, XU R, VELA P A. Real-world multi-object, multi-grasp detection[J]. IEEE Robotics and Automation Letters, 2018, 3(4): 3355-3362.
[7] 夏浩宇, 索双富, 王洋, 等. 基于Keypoint RCNN改进模型的物体抓取检测算法[J]. 仪器仪表学报, 2021, 42(4): 236-246.
XIA H Y, SUO S F, WANG Y, et al. Object grasping detection algorithm based on keypoint RCNN improved model[J]. Chinese Journal of Scientific Instrument, 2021, 42(4): 236-246.
[8] JOCHER G. YOLOv5[EB/OL]. (2020-06-17)[2022-08-16]. https://github.com/ultralytics/YOLOv5.
[9] YANG X, YAN J. Arbitrary-oriented object detection with circular smooth label[C]//European Conference on Computer Vision. Cham: Springer, 2020: 677-694.
[10] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]//IEEE Conference on Computer Vision and Pattern Recognition, 2016: 779-788.
[11] REDMON J, FARHADI A. YOLO9000: better, faster, stronger[C]//IEEE Conference on Computer Vision and Pattern Recognition, 2017: 7263-7271.
[12] REDMON J, FARHADI A. YOLOv3: an incremental improvement[EB/OL]. (2018-04-08)[2022-08-16]. https://arxiv.org/abs/1804.02767.
[13] WANG C Y, BOCHKOVSKIY A, LIAO H Y M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[EB/OL]. (2022-07-06)[2022-08-16]. https://arxiv.org/abs/2207.02696.
[14] 熊军林, 赵铎. 基于RGB图像的二阶段机器人抓取位置检测方法[J]. 中国科学技术大学学报, 2020, 50(1): 1-10.
XIONG J L, ZHAO D. Two-stage grasping detection for robots based on RGB images[J]. Journal of University of Science and Technology of China, 2020, 50(1): 1-10.
[15] TEKIN B, SINHA S N, FUA P. Real-time seamless single shot 6D object pose prediction[C]//IEEE Conference on Computer Vision and Pattern Recognition, 2018: 292-301.
[16] LENZ I, LEE H, SAXENA A. Deep learning for detecting robotic grasps[J]. The International Journal of Robotics Research, 2015, 34(4/5): 705-724.
[17] REDMON J, ANGELOVA A. Real-time grasp detection using convolutional neural networks[EB/OL]. (2014)[2022-08-16]. https://arxiv.org/abs/1412.3128.
[18] GE Z, LIU S, WANG F, et al. YOLOX: exceeding YOLO series in 2021[EB/OL]. (2021-08-16)[2022-08-16]. https://arxiv.org/abs/2107.08430.
[19] REZATOFIGHI H, TSOI N, GWAK J Y, et al. Generalized intersection over union: a metric and a loss for bounding box regression[C]//IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 658-666.
[20] XIONG R, YANG Y, HE D, et al. On layer normalization in the transformer architecture[C]//International Conference on Machine Learning, 2020: 10524-10533.
[21] LOSHCHILOV I, HUTTER F. SGDR: stochastic gradient descent with warm restarts[EB/OL]. (2016)[2022-08-16]. https://arxiv.org/abs/1608.03983.
[22] GUO D, SUN F, LIU H, et al. A hybrid deep architecture for robotic grasp detection[C]//2017 IEEE International Conference on Robotics and Automation, 2017: 1609-1614.