Computer Engineering and Applications ›› 2020, Vol. 56 ›› Issue (24): 181-187.DOI: 10.3778/j.issn.1002-8331.1912-0328


Robot Object Detection and Localization Based on Deep Learning

HUANG Yimeng, YI Yang   

  1. College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing 211816, China
  • Online: 2020-12-15  Published: 2020-12-15





Tiny-YOLOV3 is a widely used algorithm in the field of object detection. Compared with YOLOV3, it has a simpler network structure, lower computational cost, and lower hardware requirements, so it can run in real time; however, with fewer network layers, its detection accuracy is also lower. To improve the detection accuracy of Tiny-YOLOV3, this paper improves its network structure by adjusting the loss structure layer of the detection architecture and uses the correlation coefficient matrix between the convolution layer and the feature map to characterize the feature-map distribution. In addition, a loss function is designed to optimize the distribution of the loss feature layer and enhance the expressive ability of the network features. Finally, combined with the NAO robot platform, the detected object position in the image is transformed into the robot coordinate system by trigonometric localization. The model is trained and tested on a self-made data set of 4,000 images in VOC format, and object detection and grasping experiments are designed on the NAO robot in various scenes. Over 50 grasping trials, at a detection speed of 35 frames per second on single 640×480 images (the same as the original model), the optimized model improves the mAP value by 4.08%, the confidence by 20%, and the grasp success rate by 10%. The results show that the method meets real-time object detection requirements while improving detection accuracy, which benefits real-time application scenarios such as robot sorting, picking, monitoring, and service.
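The abstract describes characterizing the feature-map distribution with a correlation coefficient matrix. As a minimal sketch of that idea (the shapes and the `channel_correlation` helper are illustrative assumptions, not the paper's implementation), one can flatten each channel of a convolutional feature map and compute the pairwise channel correlations:

```python
import numpy as np

def channel_correlation(feature_map):
    """Correlation-coefficient matrix across the channels of one
    convolutional feature map of shape (C, H, W).

    Illustrative sketch only: each channel is flattened to a vector,
    then pairwise Pearson correlations are computed, giving a (C, C)
    matrix that summarizes how the channel responses co-vary.
    """
    c = feature_map.shape[0]
    flat = feature_map.reshape(c, -1)   # one row per channel
    return np.corrcoef(flat)            # (C, C) symmetric matrix

# Example with a random 16-channel map on a 13x13 grid
# (13x13 is one of Tiny-YOLOV3's detection grid sizes at 416x416 input).
rng = np.random.default_rng(0)
fmap = rng.standard_normal((16, 13, 13))
corr = channel_correlation(fmap)
print(corr.shape)   # (16, 16)
```

A matrix like this can serve as a compact statistic of the feature layer's distribution, which a loss term could then penalize or regularize; how the paper's loss uses it is not specified in the abstract.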
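The trigonometric localization step can be sketched as a ground-plane projection: given the pixel position of a detection, the camera's height and downward pitch determine where the corresponding ray meets the floor in the robot frame. The function below is a hedged illustration, not the paper's method; the camera height, pitch, and field-of-view values are assumptions for the example (the FOV numbers approximate NAO's camera but are not calibrated values):

```python
import math

def pixel_to_ground(u, v, img_w=640, img_h=480,
                    hfov=math.radians(60.97),      # assumed horizontal FOV
                    vfov=math.radians(47.64),      # assumed vertical FOV
                    cam_height=0.45,               # assumed camera height (m)
                    cam_pitch=math.radians(40.0)): # assumed downward pitch
    """Project pixel (u, v) onto the ground plane in the robot frame.

    Assumes a pinhole-like camera looking down at the floor, so the
    ray through the pixel actually intersects the ground (total pitch
    of the ray must be > 0). Returns (forward, lateral) in metres.
    """
    # angular offsets of the pixel ray from the optical axis
    yaw = (u - img_w / 2) / img_w * hfov
    pitch = cam_pitch + (v - img_h / 2) / img_h * vfov
    # intersect the ray with the ground plane via trigonometry
    forward = cam_height / math.tan(pitch)   # distance straight ahead
    lateral = forward * math.tan(yaw)        # sideways offset
    return forward, lateral

# An object detected at the image centre lies straight ahead:
f, l = pixel_to_ground(320, 240)
print(round(f, 3), round(l, 3))
```

In a full pipeline these ground coordinates would still be transformed through the robot's kinematic chain before commanding the arm; the sketch covers only the trigonometric step named in the abstract.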

Key words: object detection, robot, network structure, Tiny-YOLOV3, loss function

