Deformation Modeling for Robust 3D Face Matching

  • Authors:
  • Xiaoguang Lu; Anil K. Jain

  • Affiliations:
  • Michigan State University; Michigan State University

  • Venue:
  • CVPR '06 Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 2
  • Year:
  • 2006


Abstract

Human face recognition based on 3D surface matching is promising for overcoming the limitations of current 2D image-based face recognition systems. The 3D shape is invariant to pose and lighting changes, but not to non-rigid facial movements such as expressions. Collecting and storing multiple templates for each subject in a large database (one per expression) is not practical. We present a facial surface modeling and matching scheme that matches 2.5D test scans, in the presence of both non-rigid deformations and large pose changes (multiview), to a neutral-expression 3D face model. A geodesic-based resampling approach is applied to extract landmarks for modeling facial surface deformations. We are able to synthesize the deformation learned from a small group of subjects (control group) onto a 3D neutral model (not in the control group), resulting in a deformed template. A person-specific (3D) deformable model is built for each subject in the gallery w.r.t. the control group by combining the templates with synthesized deformations. By fitting this generative deformable model to a test scan, the proposed approach is able to handle expressions and large pose changes simultaneously. Experimental results demonstrate that the proposed matching scheme based on deformation modeling improves the matching accuracy.
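To make the fitting step concrete, the sketch below illustrates one plausible reading of "fitting a generative deformable model to a test scan": the person-specific model is treated as a neutral template plus a linear combination of synthesized deformation modes over resampled landmarks, and the combination coefficients are solved by least squares. This is a minimal illustration, not the paper's implementation; all names, shapes, and the synthetic data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_landmarks = 50                                  # geodesic-resampled landmarks (assumed count)
template = rng.normal(size=(n_landmarks, 3))      # neutral 3D template for one gallery subject
modes = rng.normal(size=(4, n_landmarks, 3))      # deformations synthesized from a control group

def fit_deformable_model(scan, template, modes):
    """Solve min_a || template + sum_k a_k * modes[k] - scan ||^2 for the
    mode coefficients a (assumes scan and template are already aligned)."""
    A = modes.reshape(len(modes), -1).T           # (3N, K) basis matrix
    b = (scan - template).ravel()                 # residual the modes must explain
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    fitted = template + np.tensordot(coeffs, modes, axes=1)
    return coeffs, fitted

# Simulate a test scan as a deformed template plus measurement noise
true_coeffs = np.array([0.8, -0.3, 0.0, 0.5])
scan = (template + np.tensordot(true_coeffs, modes, axes=1)
        + 0.01 * rng.normal(size=(n_landmarks, 3)))

coeffs, fitted = fit_deformable_model(scan, template, modes)
residual = np.linalg.norm(fitted - scan)          # usable as a matching-score surrogate
```

In such a scheme, the gallery identity whose fitted model leaves the smallest residual would be reported as the match; the paper's actual matching criterion may differ.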