On the Theoretical and Computational Analysis between SDA and Lap-LDA

  • Authors:
  • Mingbo Zhao; Zhao Zhang; Tommy W. S. Chow

  • Venue:
  • ICTAI '12 Proceedings of the 2012 IEEE 24th International Conference on Tools with Artificial Intelligence - Volume 01
  • Year:
  • 2012

Abstract

Semi-supervised dimensionality reduction is an important research topic in many pattern recognition and machine learning applications. Among the methods for semi-supervised dimensionality reduction, SDA and Lap-LDA are two popular ones. Both SDA and Lap-LDA perform dimensionality reduction by preserving the discriminative structure embedded in the labeled samples as well as the manifold structure embedded in both labeled and unlabeled samples, but they apply different schemes. SDA adds a manifold regularization term to the objective function of LDA, while Lap-LDA adds such a term to the objective function of least squares with a certain class indicator. In this paper, we further analyze the schemes of the two methods and establish the equivalence between them under a certain condition. We then show how the two methods differ when this condition is not satisfied. Extensive simulations have been conducted on several datasets, and the simulation results confirm the theoretical analysis. Finally, motivated by the equivalence and differences between the two methods, we propose an improved approach for semi-supervised dimensionality reduction. The proposed approach is a two-stage approach that obtains the optimal solutions of Lap-LDA (in the first stage) and SDA (in the second stage) with less computational cost.
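
To make the contrast between the two schemes concrete, the objectives below follow the standard formulations of SDA and Laplacian-regularized least squares found in the literature; they are a sketch, not taken verbatim from this paper. The notation is assumed here: X is the full data matrix, X_l the labeled part, L the graph Laplacian, S_b and S_t the between-class and total scatter of the labeled samples, Y the class-indicator matrix, and α, λ trade-off parameters.

```latex
% SDA: graph-Laplacian (manifold) regularizer added to the LDA ratio objective
\mathbf{a}^{*} = \arg\max_{\mathbf{a}}
  \frac{\mathbf{a}^{\top} S_b \, \mathbf{a}}
       {\mathbf{a}^{\top} \left( S_t + \alpha \, X L X^{\top} \right) \mathbf{a}}

% Lap-LDA: the same regularizer added to a least-squares fit of a class indicator Y
W^{*} = \arg\min_{W}
  \left\lVert X_l^{\top} W - Y \right\rVert_F^{2}
  + \lambda \, \operatorname{tr}\!\left( W^{\top} X L X^{\top} W \right)
```

Under this notation, the first objective is solved as a generalized eigenvalue problem, whereas the second reduces to a regularized linear system, which is one source of the computational difference discussed in the paper.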