On Optimizing Subclass Discriminant Analysis Using a Pre-clustering Technique

  • Authors:
  • Sang-Woon Kim, Senior Member, IEEE

  • Affiliation:
  • Dept. of Computer Science and Engineering, Myongji University, Yongin, South Korea 449-728

  • Venue:
  • CIARP '08: Proceedings of the 13th Iberoamerican Congress on Pattern Recognition: Progress in Pattern Recognition, Image Analysis and Applications

  • Year:
  • 2008

Abstract

Subclass Discriminant Analysis (SDA) [10] is a dimensionality reduction method that has proven successful for various types of class distributions. The advantage of SDA is that, because it does not treat the class-conditional distributions as unimodal, nonlinearly separable problems can be handled as linear ones. The drawback of this strategy, however, is that estimating the number of subclasses needed to represent the distribution of each class, i.e., finding the best partition, requires verifying all possible solutions, and this search incurs a high computational cost. In this paper, we propose a method that reduces the computational burden of SDA-based classification by selecting only a few classes of the training set to be partitioned prior to executing SDA, thereby reducing the number of classes that must be examined. To select the classes to be partitioned, the intra-set distance is employed as a criterion, and k-means clustering is then performed to divide them. Our experimental results for an artificial data set and two face databases demonstrate that the processing CPU-time of the optimized SDA can be reduced dramatically without sacrificing classification accuracy or increasing the computational complexity.
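Below is a minimal sketch of the pre-clustering idea described in the abstract, written in Python with NumPy and scikit-learn. It is an illustrative reconstruction under stated assumptions, not the paper's code: the intra_set_distance helper and the n_select and n_subclasses parameters are hypothetical choices, and ordinary LDA fitted on the relabeled subclass labels stands in for the full SDA criterion of [10].

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def intra_set_distance(X):
    """Mean pairwise squared Euclidean distance among the rows of X."""
    n = len(X)
    if n < 2:
        return 0.0
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return d2.sum() / (n * (n - 1))  # diagonal terms are zero

def precluster_labels(X, y, n_select=2, n_subclasses=2, seed=0):
    """Split only the n_select classes with the largest intra-set
    distance into n_subclasses each via k-means; all other classes
    keep a single subclass. Returns integer subclass labels."""
    classes, y_int = np.unique(y, return_inverse=True)
    scores = [intra_set_distance(X[y_int == i]) for i in range(len(classes))]
    chosen = np.argsort(scores)[::-1][:n_select]
    y_sub = y_int.copy()
    next_label = len(classes)
    for i in chosen:
        idx = np.where(y_int == i)[0]
        km = KMeans(n_clusters=n_subclasses, n_init=10,
                    random_state=seed).fit(X[idx])
        for k in range(1, n_subclasses):  # cluster 0 keeps the original label
            y_sub[idx[km.labels_ == k]] = next_label
            next_label += 1
    return y_sub

# Stand-in for SDA: LDA fitted on the subclass labels (synthetic data).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = rng.integers(0, 3, size=300)
y_sub = precluster_labels(X, y, n_select=1, n_subclasses=2)
Z = LinearDiscriminantAnalysis().fit(X, y_sub).transform(X)
print(Z.shape)  # reduced-dimensional representation
```

The saving the abstract claims follows from this structure: because only the few classes flagged by the intra-set distance are ever partitioned, the expensive search over candidate subclass partitions is restricted to those classes rather than run over every class in the training set.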