Classification-based video super-resolution using artificial neural networks

  • Authors:
  • Ming-Hui Cheng; Kao-Shing Hwang; Jyh-Horng Jeng; Nai-Wei Lin

  • Affiliations:
  • Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi 621, Taiwan
  • Department of Electrical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan and Department of Electrical Engineering, National Chung Cheng University, Chiayi 621, Taiwan
  • Department of Information Engineering, I-Shou University, Kaohsiung 84001, Taiwan
  • Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi 621, Taiwan

  • Venue:
  • Signal Processing
  • Year:
  • 2013

Abstract

In this study, a classification-based video super-resolution method using an artificial neural network (ANN) is proposed to enhance low-resolution (LR) frames to high-resolution (HR) frames. The proposed method consists of four main steps: classification, motion-trace volume collection, temporal adjustment, and ANN prediction. A classifier is designed based on the edge properties of a pixel in the LR frame to identify its spatial information. To exploit spatio-temporal information, a motion-trace volume is collected using motion estimation, which can handle object motion in the LR frames that is otherwise difficult to trace. In addition, a temporal lateral process is employed for volume adjustment to reduce unnecessary temporal features. Finally, an ANN is applied to each class to learn the complicated spatio-temporal relationship between LR and HR frames. Simulation results show that the proposed method improves both peak signal-to-noise ratio and perceptual quality.
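
The abstract outlines a concrete four-step pipeline, so a brief sketch may help make the data flow explicit. The following Python/NumPy code is a minimal, hypothetical illustration, assuming a simple gradient-based edge classifier, full-search block-matching motion estimation, a similarity-based temporal adjustment, and a per-class regressor exposing a .predict method. Function names, thresholds, and the assumption of interior pixel coordinates are for illustration only and do not reproduce the authors' implementation.

```python
# Hypothetical sketch of the pipeline: edge-based classification, motion-trace
# volume collection via block matching, temporal adjustment, and per-class ANN
# prediction. `frames` is assumed to be a list of 2D grayscale LR arrays and
# (y, x) an interior pixel with enough margin for the search window.
import numpy as np


def classify_pixel(patch, edge_thresh=20.0):
    """Classify a 3x3 LR patch by its edge properties:
    0 = smooth, 1 = horizontal edge, 2 = vertical edge, 3 = textured."""
    p = patch.astype(float)
    gx = np.abs(p[:, 2] - p[:, 0]).sum()   # horizontal gradient magnitude
    gy = np.abs(p[2, :] - p[0, :]).sum()   # vertical gradient magnitude
    if max(gx, gy) < edge_thresh:
        return 0
    if gx > 2 * gy:
        return 2
    if gy > 2 * gx:
        return 1
    return 3


def block_motion_vector(ref, cur, y, x, b=3, search=2):
    """Full-search block matching (sum of absolute differences) around (y, x);
    returns the displacement of the best-matching block in `ref`."""
    h, w = ref.shape
    blk = cur[y:y + b, x:x + b].astype(float)
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= h - b and 0 <= xx <= w - b:
                sad = np.abs(ref[yy:yy + b, xx:xx + b].astype(float) - blk).sum()
                if sad < best:
                    best, best_mv = sad, (dy, dx)
    return best_mv


def motion_trace_volume(frames, t, y, x, b=3):
    """Collect motion-compensated b x b patches from the neighbouring LR frames
    so the volume follows the moving object rather than a fixed location."""
    patches = [frames[t][y:y + b, x:x + b]]
    for ref in frames[:t] + frames[t + 1:]:
        dy, dx = block_motion_vector(ref, frames[t], y, x, b=b)
        patches.append(ref[y + dy:y + dy + b, x + dx:x + dx + b])
    return np.stack(patches)


def temporal_adjust(volume, keep=2):
    """Stand-in for the temporal lateral process: keep only the neighbouring
    patches most similar to the centre patch, discarding unreliable ones."""
    centre = volume[0].astype(float)
    dists = np.abs(volume[1:].astype(float) - centre).mean(axis=(1, 2))
    order = np.argsort(dists)[:keep]
    return np.concatenate([volume[:1], volume[1:][order]])


def predict_hr_pixel(frames, t, y, x, anns, b=3):
    """End-to-end sketch: classify, collect and adjust the volume, then apply
    the class-specific regressor (`anns[c]` is assumed to expose .predict)."""
    cls = classify_pixel(frames[t][y:y + b, x:x + b])
    vol = temporal_adjust(motion_trace_volume(frames, t, y, x, b=b))
    return anns[cls].predict(vol.reshape(1, -1))
```

In the full method, one regressor would be trained per class on pairs of adjusted LR volumes and the corresponding HR pixel values; any model resembling scikit-learn's MLPRegressor would satisfy the .predict interface assumed here.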