Three-dimensional face recognition across pose and expression

  • Authors:
  • Anil K. Jain; Xiaoguang Lu

  • Affiliations:
  • Michigan State University; Michigan State University

  • Venue:
  • Three-dimensional face recognition across pose and expression
  • Year:
  • 2006


Abstract

Face analysis and recognition have a large number of applications, such as security, communication, and entertainment. Current two-dimensional, image-based face recognition systems encounter difficulties with large facial appearance variations due to pose, illumination, and expression changes. We have developed a face recognition system that utilizes three-dimensional shape information to make the system more robust to large head pose changes. Two different modalities provided by a facial scan, namely shape and intensity, are utilized and integrated for face matching. While the 3D shape of a face does not change due to head pose (rigid) and lighting changes, it is not invariant to non-rigid facial movement, such as expressions. Collecting and storing multiple templates to account for various expressions for each subject in a large database is not practical. We have designed a hierarchical geodesic-based resampling scheme to derive a facial surface representation for establishing correspondence across expressions and subjects. Based on this representation, we extract and model three-dimensional non-rigid facial deformations, such as expression changes, for expression transfer and synthesis. For 3D face matching, a user-specific 3D deformable model driven by facial expressions is built. An alternating optimization scheme is applied to fit the deformable model to a test facial scan, yielding a matching distance. To make the matching system fully automatic, an automatic facial feature point extractor was developed. The resulting 3D recognition system handles large head pose changes and expressions simultaneously. In summary, a fully automatic system has been developed to address the problem of 3D face matching in the presence of simultaneous large pose changes and expression variations, including automatic feature extraction, integration of two modalities, and deformation analysis to handle non-rigid facial movement (e.g., expressions).
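The abstract describes fitting a user-specific deformable model to a test scan via alternating optimization. The sketch below is a minimal, hypothetical illustration of that idea (not the authors' implementation): it alternates between (a) an ICP-style rigid alignment step (Kabsch/Procrustes) and (b) a least-squares solve for linear expression-deformation coefficients, returning the residual as a matching distance. All function names, the linear deformation basis, and the assumption of known point correspondences are illustrative simplifications.

```python
import numpy as np

def rigid_align(model_pts, scan_pts):
    """Least-squares rigid transform (Kabsch) mapping model points onto scan points."""
    mu_m, mu_s = model_pts.mean(0), scan_pts.mean(0)
    H = (model_pts - mu_m).T @ (scan_pts - mu_s)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_s - R @ mu_m
    return R, t

def fit_deformable_model(neutral, expr_basis, scan, n_iters=10):
    """Alternating optimization: rigid pose step, then deformation-coefficient step.

    neutral:    (N, 3) neutral-expression model vertices
    expr_basis: (K, N, 3) linear expression-deformation modes (an assumption here)
    scan:       (N, 3) test scan points, assumed already in correspondence
    Returns the matching distance (mean point-to-point residual).
    """
    alpha = np.zeros(expr_basis.shape[0])            # deformation coefficients
    R, t = np.eye(3), np.zeros(3)
    B = expr_basis.reshape(expr_basis.shape[0], -1).T  # (3N, K) basis matrix
    for _ in range(n_iters):
        deformed = neutral + np.tensordot(alpha, expr_basis, axes=1)
        R, t = rigid_align(deformed, scan)           # step (a): pose
        residual = (scan - t) @ R - neutral          # scan in model frame
        alpha, *_ = np.linalg.lstsq(B, residual.ravel(), rcond=None)  # step (b)
    deformed = neutral + np.tensordot(alpha, expr_basis, axes=1)
    aligned = deformed @ R.T + t
    return np.linalg.norm(aligned - scan, axis=1).mean()
```

In a matching setting, a small distance would indicate that the subject's deformable model can explain the test scan; iterating the two steps lets pose and expression estimates refine each other, which is the essence of the alternating scheme the abstract mentions.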