PDSS: patch-descriptor-similarity space for effective face verification

  • Authors:
  • Xiaohua Zhai; Yuxin Peng; Jianguo Xiao

  • Affiliations:
  • Peking University, Beijing, China (all authors)

  • Venue:
  • Proceedings of the 20th ACM international conference on Multimedia
  • Year:
  • 2012

Abstract

In this paper, we propose the Patch-Descriptor-Similarity Space (PDSS) for unconstrained face verification, a task that is challenging due to image variations in pose, lighting, facial expression, and occlusion. PDSS jointly considers the patch, the descriptor, and the similarity measure, a combination ignored by existing work. PDSS is highly effective for face verification because its axes reinforce one another, maximizing the contribution of each. Each point in PDSS reflects a distinct partial matching between two facial images, which is robust to variations in those images. Moreover, by selecting a discriminating subset of points from PDSS, we can accurately describe the characteristic similarities and differences between two facial images and thus decide whether they depict the same person. Each axis of PDSS describes a distinct aspect of a face: each patch (the first axis) reflects a distinct facial trait; the descriptor (the second axis) characterizes that trait; and the similarity between two such features is measured by a particular similarity measure (the third axis). Experiments on the widely used Labeled Faces in the Wild (LFW) unconstrained face recognition dataset (13K faces) show that our proposed PDSS approach achieves the best result compared with state-of-the-art methods.
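
The following is a minimal sketch of how a PDSS-style representation of an image pair could be built, assuming a simple patch grid, two toy descriptors, and two generic similarity measures; the function names, descriptors, and parameters are illustrative placeholders, not the authors' implementation, and the discriminative point selection step is only indicated in a comment.

```python
import numpy as np
from itertools import product

def extract_patches(image, grid=(4, 4)):
    """Split a face image (H x W array) into a grid of patches (PDSS axis 1)."""
    h, w = image.shape[:2]
    ph, pw = h // grid[0], w // grid[1]
    return [image[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
            for r in range(grid[0]) for c in range(grid[1])]

# PDSS axis 2: local descriptors (toy stand-ins for real face descriptors).
def intensity_hist(patch, bins=16):
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255), density=True)
    return hist

def mean_std(patch):
    return np.array([patch.mean(), patch.std()])

DESCRIPTORS = [intensity_hist, mean_std]

# PDSS axis 3: similarity measures between two descriptor vectors.
def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def neg_l2(a, b):
    return float(-np.linalg.norm(a - b))

SIMILARITIES = [cosine_sim, neg_l2]

def pdss_vector(img_a, img_b, grid=(4, 4)):
    """One entry per (patch, descriptor, similarity) point in PDSS for an image pair."""
    pairs = zip(extract_patches(img_a, grid), extract_patches(img_b, grid))
    feats = []
    for (pa, pb), desc, sim in product(pairs, DESCRIPTORS, SIMILARITIES):
        feats.append(sim(desc(pa), desc(pb)))
    return np.array(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face1 = rng.integers(0, 256, size=(64, 64)).astype(float)
    face2 = rng.integers(0, 256, size=(64, 64)).astype(float)
    v = pdss_vector(face1, face2)
    # The pair vector (num_patches * num_descriptors * num_similarities entries)
    # could then feed a verification classifier that selects a discriminative
    # subset of PDSS points, as described in the abstract.
    print(v.shape)
```

In this sketch, each coordinate of the returned vector corresponds to one point of the three-axis space (which patch was compared, with which descriptor, under which similarity measure), so a downstream learner can weight or select the points that best separate matched from mismatched face pairs.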