Discriminant learning for face recognition

  • Authors:
  • K. N. Plataniotis; A. N. Venetsanopoulos; Juwei Lu

  • Year:
  • 2004


Abstract

An issue of paramount importance in the development of a cost-effective face recognition (FR) system is the determination of a low-dimensional, intrinsic face feature representation with enhanced discriminatory power. It is well known that the distribution of face images, under perceivable variations in viewpoint, illumination, or facial expression, is highly nonconvex and complex. In addition, the number of available training samples is usually much smaller than the dimensionality of the sample space, giving rise to the well-documented "small sample size" (SSS) problem. It is therefore not surprising that traditional linear feature extraction techniques, such as Principal Component Analysis, often fail to provide reliable and robust solutions to FR problems under realistic application scenarios. In this research, pattern recognition methods are integrated with emerging machine learning approaches, such as kernel and boosting methods, in an attempt to overcome the technical limitations of existing FR methods. To this end, a simple but cost-effective linear discriminant learning method is first introduced and proven to be robust against the SSS problem. Next, the linear solution is integrated with Bayes classification theory, resulting in a more general quadratic discriminant learning method. Both the linear and quadratic solutions assume that the face patterns under learning follow Gaussian distributions. To break through this limitation, a globally nonlinear discriminant learning algorithm is then developed by utilizing kernel machines to kernelize the proposed linear solution. In addition, two ensemble-based discriminant learning algorithms are introduced to address not only nonlinear but also large-scale FR problems often encountered in practice. The first is based on cluster analysis, using a novel separability criterion in place of the traditional similarity criterion employed in methods such as K-means. The second is a novel boosting-based learning method developed by incorporating the proposed linear discriminant solution into an improved AdaBoost framework. Extensive experimentation on well-known data sets, including the ORL, UMIST, and FERET databases, was carried out to demonstrate the performance of all the methods presented in this thesis.
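The "small sample size" problem central to the abstract can be seen directly in the within-class scatter matrix of classical Fisher discriminant analysis: when the feature dimensionality far exceeds the number of training samples, that matrix is rank-deficient and cannot be inverted. The sketch below is only an illustration of this effect, not the thesis's proposed solution; the dimensions and the use of random data are hypothetical, chosen purely for the demonstration.

```python
import numpy as np

# Hypothetical setup: d-dimensional "face vectors" (e.g. a 32x32 image
# flattened to d = 1024) with only a few training samples per subject,
# mimicking the SSS regime described in the abstract.
rng = np.random.default_rng(0)
d = 1024          # feature dimensionality
n_per_class = 5   # training samples per subject
classes = 3       # number of subjects

X = rng.standard_normal((classes * n_per_class, d))
y = np.repeat(np.arange(classes), n_per_class)

# Within-class scatter S_w: sum over classes of centered outer products.
S_w = np.zeros((d, d))
for c in range(classes):
    Xc = X[y == c] - X[y == c].mean(axis=0)
    S_w += Xc.T @ Xc

# Rank of S_w is at most (total samples - number of classes) = 12,
# far below d = 1024, so S_w is singular and classical Fisher LDA
# (which requires inverting S_w) breaks down.
rank = np.linalg.matrix_rank(S_w)
print(rank, d)
```

Discriminant learning methods that are "robust against the SSS problem", as claimed in the abstract, must work around this singularity rather than invert S_w directly.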