An Integrated Approach to 3D Face Model Reconstruction from Video

  • Authors:
  • Chia-Ming Cheng; Shang-Hong Lai

  • Venue:
  • RATFG-RTS '01 Proceedings of the IEEE ICCV Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems (RATFG-RTS'01)
  • Year:
  • 2001

Abstract

In this paper, we propose an integrated system to reconstruct 3D face models from monocular image sequences. Our approach adapts a generic 3D face model based on sparse 3D geometric constraints recovered from a video sequence. The proposed face reconstruction system consists of face feature extraction and tracking, face pose estimation, structure from motion, structure from silhouette, model adaptation, and texture mapping. In the first step, face feature points are selected in representative frames of the image sequence and then tracked through the entire sequence. An approximate face pose for each frame is obtained with an iterative pose estimation method that uses feature correspondences between each image and the 3D generic model. A robust bundle adjustment algorithm then recovers the 3D structure of a set of face feature points from the image sequence. A structure-from-silhouette algorithm is proposed to find the displacement vectors of points on the 3D generic model corresponding to sampled points on the silhouettes. Combining the 3D geometric constraints recovered from bundle adjustment and structure from silhouette, we deform the 3D generic head model with radial basis function interpolation to fit these constraints, thus reconstructing a personalized 3D head model. After the head model is reconstructed, we compute its texture map by integrating all the images in the sequence with appropriate weighting. Finally, experimental results of 3D face model recovery with the proposed system are presented.
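The model adaptation step described above deforms the generic head mesh so that it satisfies the recovered geometric constraints. A minimal sketch of that idea, assuming a Gaussian radial basis kernel and a small regularization term (the abstract does not specify the kernel or solver; the function name and `sigma` parameter are illustrative, not from the paper):

```python
import numpy as np

def rbf_deform(vertices, controls, displacements, sigma=0.5):
    """Deform mesh vertices so each control point moves by its given
    displacement, via radial basis function interpolation.

    vertices:      (V, 3) generic-model vertex positions
    controls:      (C, 3) constraint points on the generic model
    displacements: (C, 3) target displacement of each control point
    """
    # Pairwise Gaussian kernel matrix between control points
    d = np.linalg.norm(controls[:, None, :] - controls[None, :, :], axis=-1)
    K = np.exp(-(d / sigma) ** 2)
    # Solve for one 3-vector of RBF weights per control point;
    # the tiny diagonal term keeps the system well conditioned
    w = np.linalg.solve(K + 1e-9 * np.eye(len(controls)), displacements)
    # Evaluate the interpolant at every mesh vertex and apply it
    dv = np.linalg.norm(vertices[:, None, :] - controls[None, :, :], axis=-1)
    return vertices + np.exp(-(dv / sigma) ** 2) @ w
```

By construction the interpolant reproduces the displacements exactly at the control points, while vertices far from any constraint are left nearly untouched; the kernel width `sigma` controls how far each constraint's influence spreads over the mesh.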