Computer Engineering and Applications ›› 2021, Vol. 57 ›› Issue (9): 140-147.DOI: 10.3778/j.issn.1002-8331.2001-0346


Research on Visual Tracking Algorithm and Application of Target Loss Discrimination Mechanism

MOU Qingping, ZHANG Ying, ZHANG Dongbo, WANG Xinjie, YANG Zhiqiao   

  1. College of Automation and Electronic Information, Xiangtan University, Xiangtan, Hunan 411105, China
  2. National Engineering Laboratory for Robotic Visual Perception and Control Technology, Changsha 410082, China
  • Online:2021-05-01 Published:2021-04-29


Abstract:

When the tracked object is severely occluded or leaves the camera's field of view, the robot often loses its target. To track the target robustly, a target loss discrimination tracking algorithm named YOLO-RTM is proposed. The method first detects the target in the first frame of the video with YOLOv3. The Real-Time Multi-Domain Convolutional Neural Network (Real-Time MDNet, RT-MDNet) tracking algorithm then predicts the change of the target bounding box frame by frame. Finally, the Intersection over Union (IoU) between the tracked and detected boxes is calculated, and the model update strategy is chosen by comparing the IoU against a preset threshold: if the IoU is above the threshold, RT-MDNet updates the appearance model; if it is below the threshold, YOLOv3 searches for the target again and the appearance model is updated accordingly. Experimental results on a Turtlebot2 robot show that the proposed algorithm meets the reliability requirements of mobile robot tracking in general scenes and effectively improves the practicality of the tracking algorithm.
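The threshold test at the heart of the loss-discrimination mechanism can be sketched as follows. This is a minimal illustration, not the authors' implementation: the boxes use `(x1, y1, x2, y2)` corner coordinates, and the function names and the 0.5 threshold are assumptions chosen for the example.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def choose_update(tracker_box, detector_box, threshold=0.5):
    """Pick the model-update path: keep the tracker's estimate when the
    boxes agree, otherwise declare the target lost and re-detect."""
    if iou(tracker_box, detector_box) >= threshold:
        return "update_with_RT-MDNet"
    return "redetect_with_YOLOv3"
```

A low IoU signals that the tracker's prediction has drifted away from the detector's evidence (e.g. after heavy occlusion), so the detector's result reinitializes the appearance model.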

Key words: visual tracking, target loss discrimination mechanism, real-time multi-domain convolutional neural network, Intersection over Union (IoU), target leaving the camera's field of view
