Recognition of expression variant faces from one sample image per enrolled subject

  • Authors:
  • Hamidreza Rashidy Kanan; Yongsheng Gao

  • Affiliations:
  • Electrical and Computer Engineering Department, Islamic Azad University, Qazvin, Iran; School of Engineering, Griffith University, Brisbane, QLD, Australia

  • Venue:
  • ICIP '09: Proceedings of the 16th IEEE International Conference on Image Processing
  • Year:
  • 2009

Abstract

Despite remarkable progress on face recognition, little attention has been given to robustly recognizing expression variant faces from a single sample image per person. One way to deal with recognition under such conditions is to use local statistical approaches, which tend to be more robust against variations in facial expression. In this paper, we propose a new weighted matching method, based on our recent work on AWPPZMA, to recognize expression variant faces when only one exemplar image per enrolled subject is available. The proposed weighting method assigns greater significance to those parts of the face that change less under expression variation relative to the neutral face image, and less significance to those parts that change more. In this contribution, we use the difference between a local area of the input face and its corresponding local area in the neutral face image as a measure of observable structural change. The encouraging experimental results demonstrate that the proposed method provides a new solution to the problem of robustly recognizing expression variant faces in single-model databases.
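For intuition, the following is a minimal sketch of this kind of difference-driven weighted patch matching. It assumes aligned grayscale images of equal size split into non-overlapping patches; the raw-pixel patch features, Euclidean patch distance, and inverse-distance weighting are illustrative placeholders, not the pseudo Zernike moment features or the AWPPZMA weighting scheme used in the paper.

```python
import numpy as np

def split_patches(img, patch_size):
    """Split a 2-D grayscale image into non-overlapping square patches (illustrative)."""
    h, w = img.shape
    patches = []
    for r in range(0, h - patch_size + 1, patch_size):
        for c in range(0, w - patch_size + 1, patch_size):
            patches.append(img[r:r + patch_size, c:c + patch_size].astype(np.float64))
    return patches

def weighted_distance(probe, gallery, patch_size=16, eps=1e-6):
    """Weighted patch matching: patches that differ less from the (neutral)
    gallery image receive larger weights, so expression-distorted regions
    contribute less to the final distance (hypothetical weighting rule)."""
    p_patches = split_patches(probe, patch_size)
    g_patches = split_patches(gallery, patch_size)
    dists = np.array([np.linalg.norm(p - g) for p, g in zip(p_patches, g_patches)])
    weights = 1.0 / (dists + eps)          # smaller local change -> larger weight
    weights /= weights.sum()               # normalize weights to sum to 1
    return float(np.sum(weights * dists))  # weighted matching score (lower = better)

def identify(probe, gallery_images):
    """Return the index of the enrolled subject (one neutral sample each)
    whose weighted distance to the probe is smallest."""
    scores = [weighted_distance(probe, g) for g in gallery_images]
    return int(np.argmin(scores))
```

The key design choice mirrored here is that the weight of each local region is derived from how much that region deviates from the single neutral enrollment image, so regions deformed by expression are automatically down-weighted at match time rather than being modeled explicitly.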