Learning Effective Intrinsic Features to Boost 3D-Based Face Recognition

  • Authors:
Chenghua Xu; Tieniu Tan; Stan Li; Yunhong Wang; Cheng Zhong

  • Affiliations:
Center for Biometrics and Security Research (CBSR) & National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences, Beijing, China (Chenghua Xu, Tieniu Tan, Stan Li, Cheng Zhong); School of Computer Science and Engineering, Beihang University, Beijing, China (Yunhong Wang)

  • Venue:
ECCV'06 Proceedings of the 9th European Conference on Computer Vision - Volume Part II
  • Year:
  • 2006


Abstract

3D image data provide several advantages over 2D data for face recognition and overcome many of the problems that affect methods based on 2D intensity images. In this paper, we propose a novel approach to 3D-based face recognition. First, a novel representation, called intrinsic features, is presented to encode local 3D shapes. It describes complementary non-relational features to provide an intrinsic representation of faces. This representation is extracted after alignment and is invariant to translation, rotation and scale. Without reduction, tens of thousands of intrinsic features can be produced for a face, but not all of them are useful or equally important. Therefore, in the second part of the work, we introduce a method for selecting the most effective local features and combining them into a strong classifier using an AdaBoost learning procedure. Experiments are performed on a large 3D face database acquired under complex illumination, pose and expression variations. The results demonstrate that the proposed approach produces consistently better results than existing methods.
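The abstract's second step, picking a small set of effective local features and combining them into a strong classifier with AdaBoost, follows the boosting-as-feature-selection pattern popularized by Viola and Jones. Below is a minimal NumPy sketch of that general pattern, not the authors' implementation: the function name `adaboost_feature_selection`, the one-feature decision stumps, and the toy data are all illustrative assumptions.

```python
import numpy as np

def adaboost_feature_selection(X, y, n_rounds=10):
    """Discrete AdaBoost with one-feature decision stumps (a sketch).

    Each round selects the single feature (plus threshold and polarity)
    that best classifies the weighted samples, so the chosen stump
    indices double as a ranking of the most effective features.
    X: (n_samples, n_features) feature matrix
    y: (n_samples,) labels in {-1, +1}
    """
    n_samples, n_features = X.shape
    w = np.full(n_samples, 1.0 / n_samples)  # uniform sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None  # (weighted error, feature, threshold, polarity)
        for j in range(n_features):
            for thr in np.unique(X[:, j]):
                for polarity in (1, -1):
                    pred = np.where(polarity * (X[:, j] - thr) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, polarity)
        err, j, thr, polarity = best
        err = max(err, 1e-10)                    # avoid log(0) / division by 0
        alpha = 0.5 * np.log((1 - err) / err)    # weak-learner weight
        pred = np.where(polarity * (X[:, j] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)           # upweight misclassified samples
        w /= w.sum()
        stumps.append((alpha, j, thr, polarity))
    return stumps

def predict(stumps, X):
    """Strong classifier: sign of the alpha-weighted stump votes."""
    score = sum(a * np.where(p * (X[:, j] - t) > 0, 1, -1)
                for a, j, t, p in stumps)
    return np.sign(score)

# Toy usage: 200 samples, 50 candidate "local features"; only two matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = np.where(X[:, 3] + 0.5 * X[:, 7] > 0, 1, -1)
stumps = adaboost_feature_selection(X, y, n_rounds=5)
print([j for _, j, _, _ in stumps])  # indices of the selected features
```

On this toy data the selected indices concentrate on features 3 and 7, which illustrates the selection effect the paper relies on; the exhaustive threshold search is O(samples x features x thresholds) per round and would need the usual sorted-threshold optimizations at the scale of tens of thousands of intrinsic features.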