Computer Engineering and Applications ›› 2020, Vol. 56 ›› Issue (8): 205-214. DOI: 10.3778/j.issn.1002-8331.1904-0168

• Graphics and Image Processing •


Object Tracking Method Based on Background Constraints and Convolutional Features

WANG Sikui, LIU Yunpeng, QI Lin, ZHANG Zhongyu, LIN Zhiyuan   

  1. Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
    2. Institute for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110016, China
    3. University of Chinese Academy of Sciences, Beijing 100049, China
    4. Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110016, China
  • Online: 2020-04-15  Published: 2020-04-14


Abstract:

An object tracking method based on background constraints and convolutional features (TBCCF) is proposed to address the target-loss problem caused by background clutter and occlusion. First, multiple features extracted from the input image are fused and dimensionally reduced, which strengthens the discriminative power of the target representation while lowering the cost of feature computation. Second, a background constraint is introduced into the filter training process so that the filter concentrates on the target response, improving its robustness to interference. Finally, a memory filter combined with peak-to-sidelobe ratio detection is used to judge whether the target has been lost; if it has, a filter built on convolutional features is applied to re-detect and recapture the target. Experiments on 50 complex-scene video sequences from the Visual Tracking Benchmark show that the proposed algorithm achieves higher overall precision and success rate than most existing tracking algorithms.
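The first step above, multi-feature fusion followed by dimensionality reduction, can be illustrated with a minimal sketch. The snippet simply stacks several per-pixel feature maps along the channel axis and compresses the channels with PCA; the choice of features, the helper name fuse_and_reduce, and the target dimensionality out_dim are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def fuse_and_reduce(feature_maps, out_dim=18):
    """Stack per-pixel feature maps (e.g. HOG, colour names, intensity)
    along the channel axis and compress the channels with PCA.

    feature_maps: list of H x W x C_i arrays sharing the same H and W.
    Returns an H x W x out_dim array.
    """
    fused = np.concatenate(feature_maps, axis=2)        # H x W x C
    h, w, c = fused.shape
    x = fused.reshape(-1, c)
    x = x - x.mean(axis=0, keepdims=True)               # center each channel
    cov = x.T @ x / x.shape[0]                          # C x C covariance (cheap, since C is small)
    eigvals, eigvecs = np.linalg.eigh(cov)              # eigenvalues in ascending order
    proj = eigvecs[:, ::-1][:, :out_dim]                # keep the top components
    return (x @ proj).reshape(h, w, out_dim)

# Toy usage with random stand-ins for real feature channels.
hog  = np.random.rand(50, 50, 31)
cn   = np.random.rand(50, 50, 10)
gray = np.random.rand(50, 50, 1)
compact = fuse_and_reduce([hog, cn, gray], out_dim=18)
print(compact.shape)                                    # (50, 50, 18)
```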
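The loss-detection step hinges on the peak-to-sidelobe ratio (PSR) of the correlation response map: a sharp, isolated peak gives a high PSR, while occlusion or background clutter flattens the response and lowers it. Below is a minimal sketch of the standard PSR computation; the sidelobe exclusion window and the threshold used to declare the target lost are assumed values for illustration, not the parameters used in the paper.

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    """PSR of a correlation response map: the peak value measured against the
    mean and standard deviation of the map outside a small window centred
    on the peak (the sidelobe region)."""
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]

    mask = np.ones(response.shape, dtype=bool)
    r0, c0 = max(peak_idx[0] - exclude, 0), max(peak_idx[1] - exclude, 0)
    r1, c1 = peak_idx[0] + exclude + 1, peak_idx[1] + exclude + 1
    mask[r0:r1, c0:c1] = False                 # cut out the peak neighbourhood

    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)

# Toy usage: a sharp peak yields a high PSR; a flat map yields a low one.
resp = 0.1 * np.random.rand(64, 64)
resp[32, 32] = 1.0
if peak_to_sidelobe_ratio(resp) < 10.0:        # 10.0 is an assumed threshold
    print("Low PSR: treat the target as lost and trigger re-detection")
else:
    print("High PSR: tracking looks reliable")
```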

Key words: object tracking, multi-feature fusion, background constraint, memory filter, convolutional features