Computer Engineering and Applications ›› 2022, Vol. 58 ›› Issue (15): 220-228. DOI: 10.3778/j.issn.1002-8331.2012-0409

• Graphics and Image Processing •

Video Super-Resolution Reconstruction Algorithm Based on Optical Flow Residual

WU Hao, LAI Huicheng, QIAN Xuze, CHEN Hao   

  1. College of Information Science and Engineering, Xinjiang University, Urumqi 830046, China
  2. Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi 830046, China
  • Online: 2022-08-01  Published: 2022-08-01


Abstract: With the development of convolutional neural networks, video super-resolution algorithms have achieved remarkable success. Because the dependencies between frames are complex, traditional methods lack the capacity to model them, which makes accurate motion estimation and compensation difficult during video super-resolution reconstruction. A reconstruction network based on optical flow residuals is therefore proposed. A dense residual network is first used in the low-resolution space to obtain complementary information from adjacent video frames, and a pyramid structure then predicts the optical flow of the high-resolution frames. A sub-pixel convolution layer converts the low-resolution frames into high-resolution frames, which are motion-compensated with the predicted high-resolution optical flow and fed into a super-resolution fusion network to obtain a better result. In addition, a new loss function is proposed to train the network and constrain it more effectively. Experimental results on public datasets show that the reconstruction quality is improved in terms of peak signal-to-noise ratio, structural similarity, and subjective visual quality.
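The pipeline described in the abstract relies on two concrete operations: upscaling low-resolution frames with a sub-pixel convolution layer, and motion-compensating the upscaled frames with the predicted high-resolution optical flow. The following PyTorch sketch illustrates only these two building blocks; it is not the paper's implementation, and the module and function names (SubPixelUpsampler, warp_with_flow), channel counts, and scale factor are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of two building blocks named in the
# abstract: sub-pixel convolution for upscaling a low-resolution frame, and
# motion compensation that warps a high-resolution frame with a predicted
# high-resolution optical flow. All names and hyperparameters are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SubPixelUpsampler(nn.Module):
    """Upscale an LR frame by `scale` using a convolution followed by PixelShuffle."""
    def __init__(self, in_channels=3, scale=4):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, in_channels * scale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, lr_frame):
        # lr_frame: (N, C, H, W) -> (N, C, H*scale, W*scale)
        return self.shuffle(self.conv(lr_frame))


def warp_with_flow(frame, flow):
    """Warp `frame` (N, C, H, W) with a dense optical flow `flow` (N, 2, H, W)
    given in pixel units, using bilinear sampling via grid_sample."""
    n, _, h, w = frame.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device, dtype=frame.dtype),
        torch.arange(w, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]  # (N, H, W)
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # grid_sample expects coordinates normalized to [-1, 1].
    grid_x = 2.0 * grid_x / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid_y / max(h - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(frame, grid, mode="bilinear", padding_mode="border", align_corners=True)


if __name__ == "__main__":
    lr_neighbor = torch.rand(1, 3, 32, 32)        # low-resolution neighboring frame
    hr_flow = torch.rand(1, 2, 128, 128) * 2 - 1  # toy high-resolution flow field
    upsampler = SubPixelUpsampler(scale=4)
    hr_neighbor = upsampler(lr_neighbor)          # (1, 3, 128, 128)
    compensated = warp_with_flow(hr_neighbor, hr_flow)
    print(compensated.shape)                      # torch.Size([1, 3, 128, 128])
```

In the full network described by the abstract, this warping step would be preceded by the dense residual blocks that gather complementary information from adjacent frames and by the pyramid module that predicts the high-resolution flow; the compensated frames would then be passed to the super-resolution fusion network.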

Key words: video super-resolution, optical flow estimation, dense residual block
