A facial sparse descriptor for single image based face recognition

  • Authors:
  • Na Liu;Jian-Huang Lai;Wei-Shi Zheng

  • Affiliations:
  • School of Mathematics and Computational Science, Sun Yat-sen University, Guangzhou, China;School of Information Science and Technology, Sun Yat-sen University, Guangzhou, China;School of Information Science and Technology, Sun Yat-sen University, Guangzhou, China

  • Venue:
  • Neurocomputing
  • Year:
  • 2012

Abstract

Single image based face recognition under variations such as occlusion, expression and pose has been recognized as an important task in many real-world applications. The widely used holistic features are easily distorted by occlusion and other variations. To tackle this problem, sparse local feature descriptor based recognition methods have become increasingly important and have achieved promising performance. The recently developed SIFT, which detects feature points sparsely and extracts features locally for object matching across different views and scales, can also benefit single image based face recognition. However, we find in this paper that SIFT should not be directly used for face recognition, because face recognition differs from generic object matching. To this end, we develop a new framework for detecting feature keypoints sparsely, describing the feature context and matching feature points between two face images. We call this new framework the Facial Sparse Descriptor (FSD). Experiments are conducted to support our analysis of SIFT, and extensive experiments are also presented to validate the proposed FSD against SIFT and its two variants, two dense local feature descriptors (i.e., LBP and HoG), and PCA and Gabor based methods on the AR, CMU and FERET databases.
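
The abstract summarizes a three-stage pipeline: detect keypoints sparsely, describe their local context, and match keypoints between two face images. As a rough orientation only, the sketch below shows the generic SIFT version of this pipeline (the baseline the paper argues is ill-suited to face recognition), not the proposed FSD descriptor; the use of OpenCV's SIFT implementation, the ratio threshold and the file names are assumptions for illustration.

```python
# Minimal sketch of a generic SIFT keypoint pipeline (NOT the proposed FSD),
# assuming OpenCV >= 4.4 and two grayscale face crops on disk.
import cv2

def match_faces(path_a, path_b, ratio=0.75):
    """Detect sparse keypoints, describe them locally, and match them
    between two face images using Lowe's ratio test."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    # Sparse keypoint detection + local descriptor extraction
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)

    # Keep only matches whose nearest neighbour is clearly better than
    # the second nearest (Lowe's ratio test).
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    return len(good)

# Example usage: a larger count of surviving matches is taken as a crude
# similarity score between the probe and a gallery image.
# score = match_faces("probe.png", "gallery.png")
```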