A 3-D assisted generative model for facial texture super-resolution

  • Authors:
  • Pouria Mortazavian; Josef Kittler; William Christmas

  • Affiliations:
  • Centre for Vision, Speech and Signal Processing, University of Surrey, United Kingdom (all authors)

  • Venue:
  • BTAS'09 Proceedings of the 3rd IEEE international conference on Biometrics: Theory, applications and systems
  • Year:
  • 2009

Abstract

This paper describes an example-based Bayesian method for 3D-assisted pose-independent facial texture super-resolution. The method uses a 3D morphable model to map facial texture from a 2D face image to a pose- and shape-normalized texture map and vice versa. The centerpiece of the method is a generative model describing how an image is formed from a pose- and shape-normalized texture map. The goal is to reconstruct a high-resolution texture map given a low-resolution face image. Prior knowledge about the sought high-resolution texture is incorporated into the Bayesian framework through a recognition-based prior that encourages the gradient values of the texture map to be close to values predicted from the exemplar set. We develop the generative model and formulate super-resolution as maximum a posteriori (MAP) estimation. The results show that this framework is capable of pose-independent super-resolution even when the exemplar set contains only frontal face images. We present results in frontal and non-frontal poses. We also demonstrate that the technique can be used to improve face recognition results when the probe images have a lower resolution than the gallery images.
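
As a rough illustration of the kind of MAP objective the abstract describes (the exact formulation is given in the paper; the symbols below, including the blur/downsampling operators, the warp notation, and the weighting of the gradient prior, are assumptions made for exposition only):

```latex
% Hypothetical MAP objective (notation assumed, not taken verbatim from the paper):
%   T              : pose- and shape-normalized high-resolution texture map (sought)
%   I_L            : observed low-resolution face image
%   W(.)           : 3D-morphable-model mapping from texture map to the image plane
%   B, D           : blur and downsampling of the imaging process
%   \nabla T_pred  : gradient field predicted from the exemplar set
\hat{T} \;=\; \arg\max_{T}\; p(I_L \mid T)\, p(T)
        \;=\; \arg\min_{T}\;
        \underbrace{\bigl\| I_L - D\,B\,W(T) \bigr\|^{2}}_{\text{generative model}}
        \;+\;
        \lambda \underbrace{\bigl\| \nabla T - \nabla T_{\mathrm{pred}} \bigr\|^{2}}_{\text{recognition-based gradient prior}}
```

Under the usual Gaussian-noise and Gaussian-prior assumptions, maximizing the posterior is equivalent to minimizing a regularized least-squares energy of this form.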