Two-dimensional maximum margin feature extraction for face recognition

  • Authors:
  • Wen-Hui Yang; Dao-Qing Dai

  • Affiliations:
  • Center for Computer Vision and Department of Mathematics, Faculty of Mathematics and Computing, Sun Yat-Sen University, Guangzhou, China (both authors)

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics - Special issue on cybernetics and cognitive informatics
  • Year:
  • 2009

Abstract

In face recognition, most previous approaches to dimensionality reduction and classification first transform the input image into a 1-D vector, which ignores the underlying data structure and often leads to the small-sample-size problem. More recently, 2-D discriminant analysis has emerged as a technique that can overcome these drawbacks. However, 2-D methods extract features from the rows or the columns of all images, so the resulting features may still contain redundant information. In addition, most existing 2-D methods provide no automatic strategy for choosing discriminant vectors. In this paper, we study the combination of 2-D and 1-D discriminant analysis and propose a two-stage framework: "(2D)²MMC + LDA." Because features extracted with the maximum margin criterion (MMC) are robust, stable, and efficient, the first stage presents a two-directional 2-D feature extraction technique, (2D)²MMC. In the second stage, linear discriminant analysis (LDA) is performed in the (2D)²MMC subspace. Experiments on the FERET, Olivetti and Oracle Research Laboratory (ORL), and Carnegie Mellon University Pose, Illumination, and Expression (PIE) databases are conducted to evaluate our method in terms of classification accuracy and robustness.
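
The abstract describes a two-stage pipeline: bidirectional 2-D feature extraction under the maximum margin criterion, followed by LDA on the reduced features. The sketch below illustrates one plausible reading of that pipeline, assuming NumPy and scikit-learn; the scatter-matrix formulation, function names, and parameters are illustrative assumptions, not the authors' reference implementation.

```python
# Illustrative sketch of a "(2D)^2 MMC + LDA" style pipeline.
# NOT the paper's exact algorithm; names and details are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def mmc_projection(images, labels, n_components):
    """Top eigenvectors of (S_b - S_w), the maximum margin criterion,
    computed directly on 2-D images (no vectorization); acts on columns."""
    classes = np.unique(labels)
    global_mean = images.mean(axis=0)          # (h, w) mean image
    w = global_mean.shape[1]
    Sb = np.zeros((w, w))
    Sw = np.zeros((w, w))
    for c in classes:
        Xc = images[labels == c]
        mean_c = Xc.mean(axis=0)
        diff = mean_c - global_mean
        Sb += len(Xc) * diff.T @ diff          # between-class scatter
        for x in Xc:
            d = x - mean_c
            Sw += d.T @ d                      # within-class scatter
    # MMC maximizes tr(W^T (S_b - S_w) W): keep the leading eigenvectors.
    eigvals, eigvecs = np.linalg.eigh(Sb - Sw)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:n_components]]    # (w, n_components)


def two_directional_mmc(images, labels, n_cols, n_rows):
    """Stage 1: project each image from the right (column direction) and,
    via transposed images, from the left (row direction)."""
    W_right = mmc_projection(images, labels, n_cols)
    W_left = mmc_projection(images.transpose(0, 2, 1), labels, n_rows)
    # Reduced features: W_left^T @ X @ W_right for every image X.
    reduced = np.einsum('ij,njk,kl->nil', W_left.T, images, W_right)
    return reduced, W_left, W_right


def fit_pipeline(train_images, train_labels, n_cols=10, n_rows=10):
    """Stage 2: LDA on the vectorized bidirectional MMC features."""
    feats, W_left, W_right = two_directional_mmc(
        train_images, train_labels, n_cols, n_rows)
    lda = LinearDiscriminantAnalysis()
    lda.fit(feats.reshape(len(feats), -1), train_labels)
    return lda, W_left, W_right
```

As a usage note, `train_images` is assumed to be an array of shape (n_samples, height, width); test images would be projected with the stored `W_left` and `W_right` before being classified by the fitted LDA model.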