Facial occlusion reconstruction: recovering both the global structure and the local detailed texture components

  • Authors:
  • Ching-Ting Tu; Jenn-Jier James Lien

  • Affiliations:
  • Robotics Laboratory, Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan, R.O.C. (both authors)

  • Venue:
  • PSIVT'07: Proceedings of the 2nd Pacific Rim Conference on Advances in Image and Video Technology
  • Year:
  • 2007

Abstract

An automatic facial occlusion reconstruction system based on a novel learning algorithm, the direct combined model (DCM) approach, is presented. The system comprises two basic DCM modules, namely a shape reconstruction module and a texture reconstruction module. Each module models the occluded and non-occluded regions of the facial image in a single, combined eigenspace, thus preserving the correlations between the two regions in terms of facial-feature geometry and pixel gray values, respectively. As a result, when shape or texture information is available only for the non-occluded region of the facial image, the optimal shape and texture of the occluded region can be reconstructed via a process of Bayesian inference within the respective eigenspaces. To enhance the quality of the reconstructed results, the shape reconstruction module is made robust to facial feature point labeling errors by suppressing the effects of biased noise. Furthermore, the texture reconstruction module recovers the texture of the occluded facial image by synthesizing a global texture image and a local detailed texture image. The experimental results demonstrate that, compared to existing facial reconstruction systems, the reconstruction results obtained using the proposed DCM-based scheme are quantitatively closer to the ground truth.
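To make the combined-eigenspace idea in the abstract concrete, the sketch below shows one simplified way to recover an occluded part of a vector from its visible part: concatenated [visible; occluded] training vectors are modeled in a single PCA eigenspace, and at test time the eigenspace coefficients are inferred from the visible dimensions with a ridge-regularized MAP estimate before being projected back onto the occluded dimensions. This is a minimal illustration under assumed Gaussian priors, not the authors' exact DCM formulation; all function names and parameters are hypothetical.

```python
import numpy as np

def fit_combined_eigenspace(X, var_keep=0.98):
    """Fit one PCA eigenspace over concatenated [visible; occluded]
    training vectors X (shape: n_samples x d)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    eigvals = (s ** 2) / (len(X) - 1)
    # Keep enough components to explain var_keep of the variance.
    k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), var_keep)) + 1
    return mean, Vt[:k].T, eigvals[:k]          # mean, (d x k) basis, eigenvalues

def reconstruct_occluded(x_vis, vis_idx, occ_idx, mean, basis, eigvals,
                         noise_var=1e-3):
    """MAP estimate of the shared eigenspace coefficients from the visible
    dimensions, then projection onto the occluded dimensions."""
    B_vis, B_occ = basis[vis_idx], basis[occ_idx]
    r = x_vis - mean[vis_idx]
    # Gaussian prior on coefficients (variance = eigvals) plus isotropic
    # observation noise gives a ridge-regularized least-squares solve.
    A = B_vis.T @ B_vis + noise_var * np.diag(1.0 / eigvals)
    coeffs = np.linalg.solve(A, B_vis.T @ r)
    return mean[occ_idx] + B_occ @ coeffs
```

In the paper's setting, the same mechanism would be applied twice: once to concatenated shape (landmark) vectors and once to concatenated texture (gray-value) vectors, so that the coefficients inferred from the visible region carry the learned correlations over to the occluded region.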