Similarity-based Fisherfaces

  • Authors:
  • David Delgado-Gomez; Jens Fagertun; Bjarne Ersbøll; Federico M. Sukno; Alejandro F. Frangi

  • Affiliations:
  • Department of Signal Theory and Communications, Carlos III University, Madrid, Spain; Informatics and Mathematical Modelling, Technical University of Denmark, Denmark; Informatics and Mathematical Modelling, Technical University of Denmark, Denmark; Networking Research Center on Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Barcelona, Spain and Information and Communication Technologies Department, Universitat Pompeu Fabra, Barcelona, Spain; Networking Research Center on Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Barcelona, Spain and Information and Communication Technologies Department, Universitat Pompeu Fabra, Barcelona, Spain

  • Venue:
  • Pattern Recognition Letters
  • Year:
  • 2009

Abstract

This article proposes a face recognition algorithm that aims to mimic the human ability to differentiate between people. For each individual, we first compute a projection line that maximizes his or her dissimilarity to all other people in the user database. Facial identity is thus encoded in the dissimilarity pattern formed by the projection coefficients of an individual against all other enrolled user identities. Recognition is achieved by comparing the dissimilarity pattern of an unknown individual with the pattern stored for each enrolled user. Because the algorithm is built from independent one-dimensional projection lines, users can be added to or removed from the system simply by adding or removing the corresponding projection lines. Ideally, to minimize the influence of such additions and removals, the enrolled user group should be sufficiently representative of the general population. Experiments on three widely used databases (XM2VTS, AR and Equinox) show consistently good results. The proposed algorithm achieves Equal Error Rate (EER) and Half Total Error Rate (HTER) values in the ranges of 0.41-1.67% and 0.1-1.95%, respectively, results comparable to those of the top two winners in recent contests reported in the literature.
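
To make the mechanics concrete, below is a minimal sketch of the idea in Python/NumPy, assuming face images have already been vectorized (e.g., as PCA coefficients). This is not the authors' exact formulation: the per-user projection line is approximated here by a regularized two-class Fisher discriminant (one user vs. all other enrolled users), patterns are matched by Euclidean distance, and the names `fisher_direction`, `enroll`, and `identify` are illustrative.

```python
import numpy as np

def _scatter(X):
    """Scatter matrix of the rows of X about their mean."""
    Xc = X - X.mean(axis=0)
    return Xc.T @ Xc

def fisher_direction(X_pos, X_neg, reg=1e-3):
    """One-vs-rest Fisher discriminant: w proportional to Sw^{-1}(m_pos - m_neg).

    The small ridge term `reg` keeps the within-class scatter invertible
    when there are few images per user (an assumption of this sketch).
    """
    m_pos, m_neg = X_pos.mean(axis=0), X_neg.mean(axis=0)
    Sw = _scatter(X_pos) + _scatter(X_neg) + reg * np.eye(X_pos.shape[1])
    w = np.linalg.solve(Sw, m_pos - m_neg)
    return w / np.linalg.norm(w)

def enroll(gallery):
    """gallery: {user_id: (n_i, d) array of vectorized face images}.

    Returns a pattern function and each user's reference pattern
    (the mean dissimilarity pattern over that user's gallery images).
    """
    ids = sorted(gallery)
    # One projection line per user, separating that user from all others.
    W = {}
    for uid in ids:
        X_neg = np.vstack([gallery[v] for v in ids if v != uid])
        W[uid] = fisher_direction(gallery[uid], X_neg)

    def pattern(x):
        # Dissimilarity pattern: x's coefficient on every user's line.
        return np.array([W[uid] @ x for uid in ids])

    refs = {uid: np.vstack([pattern(x) for x in gallery[uid]]).mean(axis=0)
            for uid in ids}
    return pattern, refs

def identify(x, pattern, refs):
    """Nearest enrolled user by distance between dissimilarity patterns."""
    p = pattern(x)
    return min(refs, key=lambda uid: np.linalg.norm(p - refs[uid]))
```

Note how, under this reading, adding a user only requires training one new projection line and appending one coefficient to each stored pattern, which mirrors the abstract's point that enrolment changes are cheap provided the existing user group already represents the population well.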