Video super-resolution with 3D adaptive normalized convolution

  • Authors:
  • Kaibing Zhang;Guangwu Mu;Yuan Yuan;Xinbo Gao;Dacheng Tao

  • Affiliations:
  • School of Electronic Engineering, Xidian University, Xi'an 710071, China;School of Electronic Engineering, Xidian University, Xi'an 710071, China;Center for OPTical IMagery Analysis and Learning (OPTIMAL), State Key Laboratory of Transient Optics and Photonics, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, ...;School of Electronic Engineering, Xidian University, Xi'an 710071, China;Centre for Quantum Computation & Intelligent Systems and the Faculty of Engineering & Information Technology, University of Technology, Sydney, NSW 2007, Australia

  • Venue:
  • Neurocomputing
  • Year:
  • 2012

Quantified Score

Hi-index 0.01

Abstract

Classic multi-image-based super-resolution (SR) methods typically assume a global motion pattern to produce one or more high-resolution (HR) versions of a scene from a set of low-resolution (LR) images. However, due to the influence of aliasing and noise, it is difficult to obtain registration with high sub-pixel accuracy. Moreover, in practical applications a global motion pattern is rarely found in real LR inputs. In this paper, to surmount or at least reduce the aforementioned problems, we develop a novel SR framework for video sequences by extending traditional 2-dimensional (2D) normalized convolution (NC) to the 3-dimensional (3D) case. In the proposed framework, explicit motion estimation is bypassed: each target pixel is estimated as a weighted average of the pixels in its spatio-temporal neighborhood. We further up-scale the input video sequence in the temporal dimension based on the extended 3D NC, so that additional video frames can be generated. Experiments demonstrate the effectiveness of the proposed SR framework both quantitatively and perceptually.
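To make the core idea concrete, the following is a minimal sketch of zeroth-order 3D normalized convolution: each output voxel is a certainty-weighted average of its spatio-temporal neighborhood under a Gaussian applicability window, so missing or unreliable samples are simply down-weighted and no explicit motion estimation is needed. This is an illustrative assumption of the general NC formulation, not the paper's adaptive scheme; the function name, the Gaussian window, and the parameters `radius` and `sigma` are hypothetical choices for the sketch.

```python
import numpy as np

def normalized_convolution_3d(video, certainty, radius=1, sigma=1.0):
    """Zeroth-order 3D normalized convolution (illustrative sketch).

    out[p] = sum(a * c * f) / sum(a * c) over the (2*radius+1)^3
    spatio-temporal neighborhood of voxel p, where `a` is a Gaussian
    applicability window and `c` is a per-sample certainty in [0, 1].
    """
    # Gaussian applicability over a (2r+1)^3 window (t, y, x).
    ax = np.arange(-radius, radius + 1)
    t, y, x = np.meshgrid(ax, ax, ax, indexing="ij")
    a = np.exp(-(t**2 + y**2 + x**2) / (2.0 * sigma**2))

    # Pre-multiply the signal by its certainty, then pad all three axes.
    padded_cf = np.pad(video * certainty, radius, mode="edge")
    padded_c = np.pad(certainty, radius, mode="edge")

    out = np.empty(video.shape, dtype=float)
    T, H, W = video.shape
    w = 2 * radius + 1
    for i in range(T):
        for j in range(H):
            for k in range(W):
                cf = padded_cf[i:i + w, j:j + w, k:k + w]
                c = padded_c[i:i + w, j:j + w, k:k + w]
                denom = np.sum(a * c)
                # Weighted average; voxels whose whole neighborhood is
                # uncertain fall back to 0.
                out[i, j, k] = np.sum(a * cf) / denom if denom > 0 else 0.0
    return out
```

Because the denominator renormalizes by the total applicable certainty, masking out samples (certainty 0) does not bias the estimate toward zero, which is the key property that lets NC interpolate across missing pixels and, in the temporal dimension, across missing frames.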