Unified formulation of linear discriminant analysis methods and optimal parameter selection

  • Authors:
  • Senjian An, Wanquan Liu, Svetha Venkatesh, Hong Yan

  • Affiliations:
  • Senjian An, Wanquan Liu, Svetha Venkatesh: Department of Computing, Curtin University of Technology, WA 6102, Australia; Hong Yan: Department of Electronic Engineering, City University of Hong Kong, Kowloon, Hong Kong, and School of Electrical and Information Engineering, University of Sydney, NSW 2006, Australia

  • Venue:
  • Pattern Recognition

  • Year:
  • 2011

Abstract

In the last decade, many variants of classical linear discriminant analysis (LDA) have been developed to tackle the undersampled problem in face recognition. Choosing among these variants is difficult, however, because each involves an eigenvalue decomposition, which makes cross-validation computationally expensive. In this paper, we propose to solve this problem by unifying the LDA variants in one framework: principal component analysis (PCA) plus constrained ridge regression (CRR). In CRR, one selects a target (also called a class indicator) for each class and then seeks a transform that maps each class center to its target while minimizing the within-class distances, with a penalty on the transform norm as in ridge regression. Under this framework, many existing LDA methods can be viewed as PCA+CRR with particular regularization parameters and class indicators, so choosing the best LDA method reduces to choosing the best member of the CRR family. The latter can be done by comparing the members' leave-one-out (LOO) errors, and we present an efficient algorithm for evaluating these errors whose cost is comparable to that of training CRR once. Experiments on the Yale Face B, Extended Yale B and CMU-PIE databases demonstrate the effectiveness of the proposed methods.
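
To make the CRR step concrete, the constrained ridge regression described above can be written, in notation chosen here for illustration (the paper's own symbols may differ), as

    \min_{W}\ \sum_{i=1}^{n} \bigl\| W^{\top} x_i - t_{c(i)} \bigr\|^{2} + \lambda \, \| W \|_{F}^{2}
    \quad \text{s.t.} \quad W^{\top} \mu_c = t_c, \quad c = 1, \dots, K,

where x_i is the i-th training sample after PCA, c(i) its class, \mu_c the mean of class c, t_c the chosen class indicator, and \lambda \ge 0 the ridge parameter: the transform pins each class center to its target and shrinks the within-class scatter around those targets.

The following minimal Python sketch shows the unconstrained core of this idea: PCA followed by ridge regression onto one-hot class indicators, with LOO errors obtained from the classical ridge leverage identity rather than n retrainings, which illustrates why evaluating them can cost little more than one training run. It is a sketch under stated assumptions, not the paper's algorithm: the class-center constraints are omitted, the PCA projection is held fixed across folds, and the names (pca_ridge_loo and so on) are hypothetical.

    import numpy as np

    def pca_ridge_loo(X, y, n_components, lam):
        """Hypothetical sketch: PCA + ridge regression onto one-hot class
        indicators, with leave-one-out residuals from the classical ridge
        leverage identity e_loo_i = e_i / (1 - h_ii) instead of n refits.
        The paper's CRR additionally constrains the class centers to hit
        their targets; that constraint is omitted here for brevity."""
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        classes, y_idx = np.unique(y, return_inverse=True)
        T = np.eye(len(classes))[y_idx]            # one-hot class indicators

        # PCA: project the centered data onto the top principal directions.
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        Z = Xc @ Vt[:n_components].T               # reduced features, n x p

        # Ridge regression of the indicators on the reduced features.
        G = Z.T @ Z + lam * np.eye(n_components)   # p x p Gram matrix + ridge
        W = np.linalg.solve(G, Z.T @ T)            # p x K transform
        E = T - Z @ W                              # in-sample residuals

        # Leverages h_ii = [Z (Z'Z + lam I)^{-1} Z']_ii, reusing G.
        h = np.einsum('ij,ji->i', Z, np.linalg.solve(G, Z.T))
        E_loo = E / (1.0 - h)[:, None]             # LOO residuals

        # The held-out prediction for sample i is T_i - E_loo_i; classify
        # by the largest component, i.e. the nearest one-hot target.
        loo_labels = classes[np.argmax(T - E_loo, axis=1)]
        return W, float(np.mean(loo_labels != y))

Because the Gram matrix G is factored once and reused for both the fit and the leverages, the LOO error here costs essentially one extra triangular solve on top of training, mirroring the kind of reuse the abstract claims for the paper's algorithm.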