Interpolating fine textures with fields of experts prior

  • Authors:
  • Kai Guo, Xiaokang Yang, Rui Zhang, Songyu Yu, Hongyuan Zha

  • Affiliations:
  • Institute of Image Communication and Information Processing, Shanghai Key Lab of Digital Media Processing and Transmission, Shanghai Jiao Tong University, Shanghai, China (Guo, Yang, Zhang, Yu); College of Computing, Georgia Institute of Technology, Atlanta, Georgia (Zha)

  • Venue:
  • ICIP '09: Proceedings of the 16th IEEE International Conference on Image Processing
  • Year:
  • 2009

Abstract

Traditional image interpolation methods assume that the local spatial structures of the low-resolution (LR) and high-resolution (HR) images are approximately the same, and use edge information from the LR image to estimate the missing pixels. This assumption, however, no longer holds for natural images with fine and dense textures. Consequently, those methods cannot restore dense textures well and tend to generate over-fitting visual effects. In this paper, a learned HR image prior is exploited to overcome these problems. In particular, we model the prior with Fields of Experts (FoE) using Student's t-distribution experts, taking advantage of its ability to represent the non-Gaussian statistics of natural images. Maximum a Posteriori (MAP) estimation incorporating the FoE prior is then used to estimate the missing pixels. Experimental comparisons with traditional interpolation methods demonstrate that our method not only recovers fine details and produces superior PSNR values, but also avoids the visual over-fitting problems.
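To make the abstract's pipeline concrete, below is a minimal sketch (not the authors' code) of MAP interpolation under an FoE prior with Student's t-distribution experts: known LR samples anchor a quadratic data-fidelity term, and the missing HR pixels are filled by gradient ascent on the log-posterior. The filters, expert weights, step size, and iteration count here are illustrative assumptions; the paper uses FoE filters learned from natural-image training data.

```python
import numpy as np
from scipy.signal import convolve2d

def foe_log_prior_grad(x, filters, alphas):
    """Gradient of the FoE log-prior sum_i sum_c -alpha_i*log(1 + 0.5*(J_i*x)_c^2),
    i.e. Student-t experts applied to filter responses J_i * x."""
    g = np.zeros_like(x)
    for J, a in zip(filters, alphas):
        r = convolve2d(x, J, mode='same', boundary='symm')
        psi = -a * r / (1.0 + 0.5 * r ** 2)      # derivative of the log expert
        # apply J^T by correlating with the flipped filter
        g += convolve2d(psi, J[::-1, ::-1], mode='same', boundary='symm')
    return g

def map_interpolate(lr, factor=2, iters=300, lam=20.0, step=0.05):
    """Toy MAP interpolation: keep known LR samples close to the data,
    fill missing HR pixels by gradient ascent on the FoE log-posterior."""
    # Hand-picked derivative filters and weights (assumptions, not learned).
    filters = [np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])]
    alphas = [1.0, 1.0]
    hr = np.zeros((lr.shape[0] * factor, lr.shape[1] * factor))
    mask = np.zeros_like(hr)
    hr[::factor, ::factor] = lr                  # place known LR samples
    mask[::factor, ::factor] = 1.0
    x = hr.copy()
    for _ in range(iters):
        grad = foe_log_prior_grad(x, filters, alphas) - lam * mask * (x - hr)
        x += step * grad
    return np.clip(x, 0.0, 1.0)

if __name__ == "__main__":
    lr = np.random.rand(16, 16)                  # placeholder LR patch in [0, 1]
    hr = map_interpolate(lr)
    print(hr.shape)                              # (32, 32)
```

The split between a data term on observed pixels and a learned prior on the full HR grid is what distinguishes this approach from edge-directed interpolation, which relies only on local LR structure.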