Reshaping 3D facial scans for facial appearance modeling and 3D facial expression analysis

  • Authors:
  • Yanhui Huang; Xing Zhang; Yangyu Fan; Lijun Yin; Lee Seversky; James Allen; Tao Lei; Weijun Dong

  • Affiliations:
  • Yanhui Huang: Northwestern Polytechnical University, China
  • Xing Zhang: State University of New York at Binghamton, USA
  • Yangyu Fan: Northwestern Polytechnical University, China
  • Lijun Yin: State University of New York at Binghamton, USA
  • Lee Seversky: State University of New York at Binghamton, USA and Air Force Research Lab at Rome, NY, USA
  • James Allen: State University of New York at Binghamton, USA
  • Tao Lei: Northwestern Polytechnical University, China
  • Weijun Dong: Northwest University, China

  • Venue:
  • Image and Vision Computing
  • Year:
  • 2012

Abstract

3D face scans are widely used for face modeling and analysis. Because scans yield variable point clouds from frame to frame, they may miss parts of the face or lack point-to-point correspondences across scans, which makes such data difficult to analyze directly. This paper presents an efficient approach to representing facial shapes from face scans by reconstructing face models from regional information and a generic model. A new 3D feature detection method is proposed, together with a hybrid scheme that combines two vertex-mapping algorithms, displacement mapping and point-to-surface mapping, with a regional blending algorithm to reconstruct facial surface detail. The resulting models represent individual facial shapes consistently and adaptively, establishing facial point correspondences across individual models. The accuracy of the generated models is evaluated quantitatively, and their applicability is validated through 3D facial expression recognition on the static 3DFE and dynamic 4DFE databases. A comparison with the state of the art is also reported.
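
The reconstruction pipeline described above combines two vertex-mapping steps (displacement mapping and point-to-surface mapping) with a regional blend. As a rough illustration of how such a hybrid might be wired together, the following is a minimal sketch, not the authors' implementation: nearest-neighbor queries stand in for true ray casting and surface projection, and every function name, the per-vertex region weights, and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def displacement_map(generic_verts, normals, scan_points):
    """Offset each generic-model vertex along its unit normal by the
    signed distance to its nearest scan point (assumption: nearest
    neighbors stand in for ray-surface intersection)."""
    _, idx = cKDTree(scan_points).query(generic_verts)
    offsets = np.einsum('ij,ij->i', scan_points[idx] - generic_verts, normals)
    return generic_verts + offsets[:, None] * normals

def point_to_surface_map(generic_verts, scan_points):
    """Snap each generic-model vertex to its closest scan point
    (assumption: the point cloud stands in for the scan surface)."""
    _, idx = cKDTree(scan_points).query(generic_verts)
    return scan_points[idx]

def regional_blend(verts_a, verts_b, weights):
    """Per-vertex blend of the two mapped meshes; weights in [0, 1]
    encode how strongly each facial region favors the first mapping."""
    w = np.clip(weights, 0.0, 1.0)[:, None]
    return w * verts_a + (1.0 - w) * verts_b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scan = rng.normal(size=(5000, 3))      # stand-in for a raw face scan
    generic = rng.normal(size=(1000, 3))   # stand-in generic-model vertices
    normals = generic / np.linalg.norm(generic, axis=1, keepdims=True)

    mapped_a = displacement_map(generic, normals, scan)
    mapped_b = point_to_surface_map(generic, scan)
    weights = rng.uniform(size=len(generic))  # hypothetical region weights
    fused = regional_blend(mapped_a, mapped_b, weights)
    print(fused.shape)  # (1000, 3): same topology as the generic model
```

Because every vertex of the fused mesh originates from the generic model, any two faces reconstructed this way share the same vertex indexing, which is the point-correspondence property the abstract highlights.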