Partial matching of interpose 3D facial data for face recognition

  • Authors:
  • P. Perakis;G. Passalis;T. Theoharis;G. Toderici;I. A. Kakadiaris

  • Affiliations:
  • P. Perakis, G. Passalis, T. Theoharis: Computer Graphics Laboratory, Department of Informatics and Telecommunications, University of Athens, Ilisia, Greece and Computational Biomedicine Lab, Department of Computer Science, University o ...
  • G. Toderici, I. A. Kakadiaris: Computational Biomedicine Lab, Department of Computer Science, University of Houston, Texas

  • Venue:
  • BTAS'09 Proceedings of the 3rd IEEE International Conference on Biometrics: Theory, Applications and Systems
  • Year:
  • 2009

Abstract

Three-dimensional face recognition has lately received much attention due to its robustness in the presence of lighting and pose variations. However, certain pose variations often result in missing facial data. This is common in realistic scenarios, such as uncontrolled environments and uncooperative subjects. Most previous 3D face recognition methods do not handle extensive missing data as they rely on frontal scans. Currently, there is no method to perform recognition across scans of different poses. A unified method that addresses the partial matching problem is proposed. Both frontal and side (left or right) facial scans are handled in a way that allows interpose retrieval operations. The main contributions of this paper include a novel 3D landmark detector and a deformable model framework that supports symmetric fitting. The landmark detector is utilized to detect the pose of the facial scan. This information is used to mark areas of missing data and to roughly register the facial scan with an Annotated Face Model (AFM). The AFM is fitted using a deformable model framework that introduces the method of exploiting facial symmetry where data are missing. Subsequently, a geometry image is extracted from the fitted AFM that is independent of the original pose of the facial scan. Retrieval operations, such as face identification, are then performed on a wavelet domain representation of the geometry image. Thorough testing was performed by combining the largest publicly available databases. To the best of our knowledge, this is the first method that handles side scans with extensive missing data (e.g., up to half of the face missing).
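The key idea in the fitting stage, exploiting facial symmetry where data are missing, can be illustrated with a minimal sketch. This is not the authors' code: the function names and the assumption that the face's sagittal plane lies at x = 0 (after the rough registration with the AFM described above) are hypothetical simplifications. The sketch mirrors a one-sided point cloud across that plane to synthesize data for the missing half.

```python
import numpy as np

def mirror_across_sagittal_plane(points: np.ndarray) -> np.ndarray:
    """Reflect an (N, 3) point cloud across the x = 0 plane.

    Assumes the scan has already been roughly registered so that the
    facial symmetry plane coincides with x = 0 (an assumption of this
    sketch, not a detail taken from the paper).
    """
    mirrored = points.copy()
    mirrored[:, 0] = -mirrored[:, 0]  # negate x to reflect left <-> right
    return mirrored

def complete_side_scan(side_points: np.ndarray) -> np.ndarray:
    """Combine a one-sided scan with its mirror image to cover the full face."""
    return np.vstack([side_points, mirror_across_sagittal_plane(side_points)])

# Toy example: three points from a hypothetical 'left side' scan (x < 0).
left = np.array([[-1.0, 0.5, 2.0],
                 [-0.3, 1.2, 1.8],
                 [-2.0, 0.0, 1.5]])
full = complete_side_scan(left)
print(full.shape)  # (6, 3): the original points plus their mirrored copies
```

In the actual method the symmetric fitting happens inside the deformable-model framework rather than on raw points, but the sketch shows why a side scan with up to half the face missing can still constrain both halves of the model.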