Using Kinect for face recognition under varying poses, expressions, illumination and disguise

  • Authors:
  • Wanquan Liu; Ajmal S. Mian; Aneesh Krishna; Billy Y. L. Li

  • Affiliations:
  • Curtin University, Bentley, Western Australia; The University of Western Australia, Crawley, Western Australia; Curtin University, Bentley, Western Australia; Curtin University, Bentley, Western Australia

  • Venue:
  • WACV '13 Proceedings of the 2013 IEEE Workshop on Applications of Computer Vision (WACV)
  • Year:
  • 2013

Abstract

We present an algorithm that uses a low-resolution 3D sensor for robust face recognition under challenging conditions. A preprocessing algorithm is proposed which exploits facial symmetry at the 3D point-cloud level to obtain a canonical frontal view (shape and texture) of the faces irrespective of their initial pose. This algorithm also fills holes and smooths the noisy depth data produced by the low-resolution sensor. The canonical depth map and texture of a query face are then sparsely approximated from separate dictionaries learned from training data. The texture is transformed from RGB to the Discriminant Color Space before sparse coding, and the reconstruction errors from the two sparse coding steps are added per identity in the dictionary. The query face is assigned the identity with the smallest combined reconstruction error. Experiments are performed on a publicly available database containing over 5000 facial images (RGB-D) with varying poses, expressions, illumination and disguise, acquired with the Kinect sensor. Recognition rates are 96.7% for the RGB-D data and 88.7% for the noisy depth data alone. Our results demonstrate the feasibility of low-resolution 3D sensors for robust face recognition.
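The identification step described in the abstract — sum the per-identity reconstruction errors from the depth and texture codings, then pick the identity with the smallest combined error — can be sketched as follows. This is a simplified illustration, not the authors' code: it replaces learned-dictionary sparse coding with plain least squares over each identity's atoms, and all names, dimensions, and data are invented for the example.

```python
import numpy as np

def residual(D, y):
    """Reconstruction error of query vector y from an identity's
    dictionary D (columns are atoms). Unconstrained least squares is
    used here as a stand-in for the paper's sparse coding step."""
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    return np.linalg.norm(y - D @ coef)

def classify(depth_dicts, tex_dicts, y_depth, y_tex):
    """Add depth and texture reconstruction errors per identity and
    return the identity with the smallest combined error."""
    combined = {ident: residual(depth_dicts[ident], y_depth)
                       + residual(tex_dicts[ident], y_tex)
                for ident in depth_dicts}
    return min(combined, key=combined.get)

# Toy demo: two identities, each with 5 random atoms per modality.
rng = np.random.default_rng(0)
depth_dicts = {"A": rng.standard_normal((50, 5)),
               "B": rng.standard_normal((50, 5))}
tex_dicts = {"A": rng.standard_normal((50, 5)),
             "B": rng.standard_normal((50, 5))}
# The query lies in the span of identity A's atoms, so A's residual is ~0.
y_depth = depth_dicts["A"] @ rng.standard_normal(5)
y_tex = tex_dicts["A"] @ rng.standard_normal(5)
print(classify(depth_dicts, tex_dicts, y_depth, y_tex))  # A
```

Using separate dictionaries per modality lets the depth and texture errors be computed independently before fusion, which is what allows the depth-only variant (88.7% in the paper's experiments) to be evaluated by simply dropping the texture term.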