Lighting aware preprocessing for face recognition across varying illumination

  • Authors:
  • Hu Han; Shiguang Shan; Laiyun Qing; Xilin Chen; Wen Gao

  • Affiliations:
  • Key Lab of Intelligent Information Processing of Chinese Academy of Sciences, Institute of Computing Technology, CAS, Beijing, China and Graduate University of Chinese Academy of Sciences, Beijing ...
  • Key Lab of Intelligent Information Processing of Chinese Academy of Sciences, Institute of Computing Technology, CAS, Beijing, China
  • Graduate University of Chinese Academy of Sciences, Beijing, China
  • Key Lab of Intelligent Information Processing of Chinese Academy of Sciences, Institute of Computing Technology, CAS, Beijing, China
  • Key Lab of Intelligent Information Processing of Chinese Academy of Sciences, Institute of Computing Technology, CAS, Beijing, China and Institute of Digital Media, Peking University, Beijing, Chi ...

  • Venue:
  • ECCV'10: Proceedings of the 11th European Conference on Computer Vision, Part II
  • Year:
  • 2010

Abstract

Illumination variation is one of the intractable yet crucial problems in face recognition, and many lighting normalization approaches have been proposed over the past decades. Nevertheless, most of them preprocess all face images in the same way, without considering the specific lighting condition in each image. In this paper, we propose a lighting aware preprocessing (LAP) method, which performs adaptive preprocessing for each testing image according to its lighting attribute. Specifically, the lighting attribute of a testing face image is first estimated by using a spherical harmonic model. Then, a von Mises-Fisher (vMF) distribution learned from a training set is exploited to model the probability that the estimated lighting belongs to normal lighting. Based on this probability, adaptive preprocessing is performed to normalize the lighting variation in the input image. Extensive experiments on the Extended YaleB and Multi-PIE face databases demonstrate the effectiveness of the proposed method.
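
The abstract outlines a three-step pipeline: estimate the lighting of the input image, score it against a vMF model of "normal" lighting, and adapt the strength of normalization accordingly. Below is a minimal Python sketch of that idea, not the authors' implementation: the lighting estimator is a crude least-squares stand-in for spherical-harmonic fitting, and `normalize_fn`, `normals`, `albedo`, `mu`, and `kappa` are all hypothetical inputs (e.g., `normalize_fn` could be histogram equalization, and `mu`, `kappa` would be fit on a training set of normally lit faces).

```python
import numpy as np

def estimate_lighting_direction(intensities, normals, albedo):
    """Rough stand-in for spherical-harmonic lighting estimation:
    least-squares fit of a single dominant light direction from per-pixel
    intensities (N,), surface normals (N, 3), and albedo (N,)."""
    A = albedo[:, None] * normals                  # Lambertian model: I ~ albedo * (n . l)
    l, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return l / (np.linalg.norm(l) + 1e-12)         # unit lighting direction

def normal_lighting_score(light_dir, mu, kappa):
    """Score in (0, 1] derived from a vMF density on the sphere: 1 when the
    estimated direction aligns with the learned mean direction mu of normal
    lighting, decaying for atypical directions (concentration kappa)."""
    return float(np.exp(kappa * (np.dot(mu, light_dir) - 1.0)))

def lighting_aware_preprocess(image, intensities, normals, albedo,
                              mu, kappa, normalize_fn):
    """Sketch of lighting-aware preprocessing: blend the original image with
    its normalized version, weighting by how 'normal' the lighting looks."""
    light_dir = estimate_lighting_direction(intensities, normals, albedo)
    score = normal_lighting_score(light_dir, mu, kappa)
    normalized = normalize_fn(image)
    # Mostly keep the original image under normal lighting; rely more on the
    # normalized image as the estimated lighting becomes atypical.
    return score * image + (1.0 - score) * normalized
```

The key design point this sketch tries to capture is the adaptivity: a fixed preprocessing method applies the same transform everywhere, whereas here the amount of normalization is a function of the estimated lighting, so well-lit images are left largely untouched.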