Nonlinear Discriminant Analysis on Embedded Manifold

  • Authors:
  • Shuicheng Yan, Yuxiao Hu, Dong Xu, Hong-Jiang Zhang, B. Zhang, Qiansheng Cheng

  • Affiliations:
  • Beckman Inst., Univ. of Illinois, Urbana-Champaign, IL

  • Venue:
  • IEEE Transactions on Circuits and Systems for Video Technology
  • Year:
  • 2007

Abstract


Traditional manifold learning algorithms, such as ISOMAP, LLE, and Laplacian Eigenmap, mainly focus on uncovering the latent low-dimensional geometric structure of the training samples in an unsupervised manner, ignoring useful class information. The derived low-dimensional representations are therefore not necessarily optimal in discriminative capability. In this paper, we study the discriminant analysis problem by considering the nonlinear manifold structure of the data space. To this end, a new clustering algorithm, called Intra-Cluster Balanced K-Means (ICBKM), is first proposed to partition the samples into multiple clusters while ensuring that the classes are represented in balanced proportions within each cluster; approximately, each cluster can be considered a local patch on the embedded manifold. Then, the local discriminative projections for the different clusters are calculated simultaneously by optimizing the global Fisher criterion based on the cluster-weighted data representation. Compared with traditional linear/kernel discriminant analysis (KDA) algorithms, the proposed algorithm has the following characteristics: 1) it is essentially a KDA algorithm with a geometry-adaptive kernel tailored to the specific data structure, in contrast to traditional KDA, in which the kernel is fixed and independent of the data set; 2) it is approximately a locally linear yet globally nonlinear discriminant analyzer; 3) it does not need to store the original samples to compute the low-dimensional representation of new data; and 4) it is computationally efficient compared with traditional KDA when the sample number is large. A toy problem on artificial data demonstrates the effectiveness of the proposed algorithm in deriving discriminative representations for problems with a nonlinear classification hyperplane. 
Face recognition experiments on the YALE and CMU PIE databases show that the proposed algorithm significantly outperforms linear discriminant analysis (LDA) as well as Mixture LDA, and achieves higher accuracy than KDA with traditional kernels.
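The abstract does not give pseudocode for ICBKM, but the clustering step it describes (k-means alternation with the extra constraint that each class is spread in balanced proportions across clusters) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the greedy capacity-limited assignment and all function and parameter names (`icbkm_sketch`, `n_iter`, `cap`) are assumptions introduced here.

```python
import numpy as np

def icbkm_sketch(X, y, k, n_iter=20, seed=0):
    """Hypothetical sketch of Intra-Cluster Balanced K-Means:
    alternate centroid updates with per-class assignment, capping
    how many samples of each class a cluster may take so that
    every cluster keeps a balanced class mix (assumed scheme)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        for c in np.unique(y):
            idx = np.where(y == c)[0]
            # squared distances of this class's samples to each centroid
            d = ((X[idx, None, :] - centroids[None, :, :]) ** 2).sum(-1)
            # each cluster may take at most ceil(n_c / k) samples of class c
            cap = int(np.ceil(len(idx) / k))
            counts = np.zeros(k, dtype=int)
            # greedy: assign samples with the clearest preference first
            for i in np.argsort(d.min(axis=1)):
                for j in np.argsort(d[i]):
                    if counts[j] < cap:
                        labels[idx[i]] = j
                        counts[j] += 1
                        break
        # standard k-means centroid update over the balanced partition
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

With two spatial blobs that each contain both classes, the capacity constraint forces every resulting cluster to hold an equal share of each class, which is the property the paper relies on to treat each cluster as a class-balanced local patch.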