[1] ECHELMEYER W, KIRCHHEIM A, WELLBROCK E. Robotics-logistics: challenges for automation of logistic processes[C]//Proceedings of the 2008 IEEE International Conference on Automation and Logistics, 2008.
[2] CACCAVALE R, ARPENTI P, PADUANO G, et al. A flexible robotic depalletizing system for supermarket logistics[J]. IEEE Robotics and Automation Letters, 2020, 5(3): 4471-4476.
[3] ARPENTI P, CACCAVALE R, PADUANO G, et al. RGB-D recognition and localization of cases for robotic depalletizing in supermarkets[J]. IEEE Robotics and Automation Letters, 2020, 5(4): 6233-6238.
[4] KRUG R, STOYANOV T, TINCANI V, et al. The next step in robot commissioning: autonomous picking & palletizing[J]. IEEE Robotics and Automation Letters, 2016, 1(1): 546-553.
[5] 倪鹤鹏, 刘亚男, 张承瑞, 等. 基于机器视觉的Delta机器人分拣系统算法[J]. 机器人, 2016, 38(1): 49-55.
NI H P, LIU Y N, ZHANG C R, et al. Sorting system algorithms based on machine vision for Delta robot[J]. Robot, 2016, 38(1): 49-55.
[6] MAITIN-SHEPARD J, CUSUMANO-TOWNER M, LEI J, et al. Cloth grasp point detection based on multiple-view geometric cues with application to robotic towel folding[C]//Proceedings of the 2010 IEEE International Conference on Robotics and Automation. Piscataway: IEEE, 2010: 2308-2315.
[7] RAMISA A, ALENYA G, MORENO-NOGUER F, et al. Using depth and appearance features for informed robot grasping of highly wrinkled clothes[C]//Proceedings of the 2012 IEEE International Conference on Robotics and Automation. Piscataway: IEEE, 2012: 1703-1708.
[8] JIANG Y, MOSESON S, SAXENA A. Efficient grasping from RGBD images: learning using a new rectangle representation[C]//Proceedings of the 2011 IEEE International Conference on Robotics and Automation. Piscataway: IEEE, 2011: 3304-3311.
[9] KATSOULAS D, KOSMOPOULOS D I. An efficient depalletizing system based on 2D range imagery[C]//Proceedings of the 2001 IEEE International Conference on Robotics and Automation, 2001.
[10] HOLZ D, TOPALIDOU-KYNIAZOPOULOU A, STÜCKLER J, et al. Real-time object detection, localization and verification for fast robotic depalletizing[C]//Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2015.
[11] NAKAMOTO H, ETO H, SONOURA T, et al. High-speed and compact depalletizing robot capable of handling packages stacked complicatedly[C]//Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2016.
[12] JOCAS M, KURREK P, ZOGHLAMI F, et al. AI-based learning approach with consideration of safety criteria on example of a depalletization robot[C]//Proceedings of the Design Society: International Conference on Engineering Design, 2019: 2041-2050.
[13] FAN X, LIU X, WANG X, et al. An automatic robot unstacking system based on binocular stereo vision[C]//Proceedings of the 2014 International Conference on Security, Pattern Analysis, and Cybernetics, 2014.
[14] 韩鑫, 余永维, 杜柳青. 基于改进单次多框检测算法的机器人抓取系统[J]. 计算机应用, 2020, 40(8): 2434-2440.
HAN X, YU Y W, DU L Q. Robotic grasping system based on improved single shot multibox detector algorithm[J]. Journal of Computer Applications, 2020, 40(8): 2434-2440.
[15] 杜学丹, 蔡莹皓, 鲁涛, 等. 一种基于深度学习的机械臂抓取方法[J]. 机器人, 2017, 39(6): 820-828.
DU X D, CAI Y H, LU T, et al. A robotic grasping method based on deep learning[J]. Robot, 2017, 39(6): 820-828.
[16] 韩兴, 刘晓平, 王刚, 等. 基于深度神经网络复杂场景下的机器人拣选方法[J]. 北京邮电大学学报, 2019, 42(5): 22-28.
HAN X, LIU X P, WANG G, et al. Robotic sorting method in complex scene based on deep neural network[J]. Journal of Beijing University of Posts and Telecommunications, 2019, 42(5): 22-28.
[17] 郑凯, 方春, 袁思邈, 等. Mask R-CNN模型在茄花花期识别中的应用研究[J]. 计算机工程与应用, 2022, 58(18): 318-326.
ZHENG K, FANG C, YUAN S M, et al. Application of Mask R-CNN model in identification of eggplant flowering period[J]. Computer Engineering and Applications, 2022, 58(18): 318-326.
[18] 王德明, 颜熠, 周光亮, 等. 基于实例分割网络与迭代优化方法的3D视觉分拣系统[J]. 机器人, 2019, 41(5): 637-648.
WANG D M, YAN Y, ZHOU G L, et al. 3D vision-based picking system with instance segmentation network and iterative optimization method[J]. Robot, 2019, 41(5): 637-648.
[19] GU W, BAI S, KONG L. A review on 2D instance segmentation based on deep neural networks[J]. Image and Vision Computing, 2022: 104401.
[20] HARIHARAN B, ARBELÁEZ P, GIRSHICK R, et al. Simultaneous detection and segmentation[C]//Proceedings of the 13th European Conference on Computer Vision, 2014: 297-312.
[21] FONTANA E, ZAROTTI W, RIZZINI D L. A comparative assessment of parcel box detection algorithms for industrial applications[C]//Proceedings of the 2021 European Conference on Mobile Robots, 2021.
[22] HE K M, GKIOXARI G, DOLLAR P, et al. Mask R-CNN[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 2980-2988.
[23] JADERBERG M, SIMONYAN K, ZISSERMAN A, et al. Spatial transformer networks[C]//Proceedings of the 29th Annual Conference on Neural Information Processing Systems, 2015: 2017-2025.
[24] ZHANG Z. A flexible new technique for camera calibration[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334.
[25] 杨尚昆, 王岩松, 郭辉, 等. 基于一阶径向畸变算法的双目摄像机多位姿标定方法[J]. 计算机应用, 2018, 38(9): 2655-2659.
YANG S K, WANG Y S, GUO H, et al. Binocular camera multi-pose calibration method based on radial alignment constraint algorithm[J]. Journal of Computer Applications, 2018, 38(9): 2655-2659.
[26] 杨广林, 孔令富, 王洁. 一种新的机器人手眼关系标定方法[J]. 机器人, 2006, 28(4): 400-405.
YANG G L, KONG L F, WANG J. A new calibration approach to hand-eye relation of manipulator[J]. Robot, 2006, 28(4): 400-405.
[27] GIRSHICK R. Fast R-CNN[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2015: 1440-1448.