Face reconstruction across different poses and arbitrary illumination conditions

  • Authors:
  • Sen Wang, Lei Zhang, Dimitris Samaras

  • Affiliations:
  • Department of Computer Science, SUNY at Stony Brook, NY (all authors)

  • Venue:
  • AVBPA'05 Proceedings of the 5th international conference on Audio- and Video-Based Biometric Person Authentication
  • Year:
  • 2005

Abstract

In this paper, we present a novel method for face reconstruction from multi-pose face images taken under arbitrary, unknown illumination conditions. Previous work shows that any face image can be represented by a set of low-dimensional parameters: shape parameters, spherical harmonic basis (SHB) parameters, pose parameters, and illumination coefficients. Face reconstruction can therefore be performed by recovering this set of parameters from the input images. In this paper, we demonstrate that the shape and SHB parameters can be estimated quickly and robustly by minimizing silhouette errors and image intensity errors. We propose a new algorithm that uses silhouettes to detect corresponding points between the 3D face model and the input images, and we apply a model-based bundle adjustment technique to perform the minimization. We provide a series of experiments on both synthetic and real data, and the results show that our method achieves accurate reconstruction of face shape and texture.
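
To make the parameter-recovery idea concrete, the sketch below sets up a joint least-squares objective over shape, SHB, pose, and illumination parameters, with one residual block for silhouette error and one for image intensity error. It is a minimal illustration only: the model functions, dimensions, weights, and random data are assumptions made for the sake of a runnable example, not the authors' implementation, which detects silhouette correspondences explicitly and performs the minimization with model-based bundle adjustment.

```python
# Minimal sketch: joint estimation of shape, SHB, pose, and illumination
# parameters by minimizing silhouette + intensity residuals.
# All model functions and sizes below are illustrative placeholders.
import numpy as np
from scipy.optimize import least_squares

N_SHAPE, N_SHB, N_POSE, N_LIGHT = 10, 10, 6, 9   # hypothetical parameter counts
N_VERTS, N_PIX = 50, 40                          # toy problem size

def reconstruct_shape(shape_params, mean_shape, shape_basis):
    """Linear shape model: mean shape plus a weighted sum of basis deformations."""
    return (mean_shape + shape_basis @ shape_params).reshape(-1, 3)

def silhouette_residual(vertices, pose_params, silhouette_pts):
    """Toy stand-in for the silhouette term: distance between projected model
    points and their detected silhouette correspondences in the image."""
    # Only a 2D translation from the pose vector is used in this toy projection.
    projected = vertices[:, :2] + pose_params[:2]
    return (projected - silhouette_pts).ravel()

def intensity_residual(shb_params, light_coeffs, shb_basis, observed):
    """Spherical-harmonic image formation: per-pixel basis images reconstructed
    from the SHB parameters, combined linearly with 9 illumination coefficients."""
    basis_images = (shb_basis @ shb_params).reshape(N_PIX, N_LIGHT)
    return basis_images @ light_coeffs - observed

def residuals(params, model, obs, w_sil=1.0, w_int=1.0):
    """Stack the weighted silhouette and intensity errors for least squares."""
    s = params[:N_SHAPE]
    b = params[N_SHAPE:N_SHAPE + N_SHB]
    p = params[N_SHAPE + N_SHB:N_SHAPE + N_SHB + N_POSE]
    l = params[-N_LIGHT:]
    verts = reconstruct_shape(s, model["mean_shape"], model["shape_basis"])
    r_sil = silhouette_residual(verts, p, obs["silhouette"])
    r_int = intensity_residual(b, l, model["shb_basis"], obs["intensity"])
    return np.concatenate([w_sil * r_sil, w_int * r_int])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = {"mean_shape": rng.normal(size=3 * N_VERTS),
             "shape_basis": 0.1 * rng.normal(size=(3 * N_VERTS, N_SHAPE)),
             "shb_basis": 0.1 * rng.normal(size=(N_PIX * N_LIGHT, N_SHB))}
    obs = {"silhouette": rng.normal(size=(N_VERTS, 2)),
           "intensity": rng.normal(size=N_PIX)}
    x0 = np.zeros(N_SHAPE + N_SHB + N_POSE + N_LIGHT)
    fit = least_squares(residuals, x0, args=(model, obs))
    print("final cost:", fit.cost)
```

A full implementation would replace the toy projection with perspective projection of the 3D face model under a 6-DOF pose, use the silhouette-based correspondence detection described above, and exploit the sparse structure of the problem in the model-based bundle adjustment step rather than a generic least-squares solver.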