Linear discriminant analysis (LDA) is widely used as a dimension reduction method in data mining and machine learning. However, it suffers from the small sample size (SSS) problem when the data dimensionality exceeds the number of training samples. Many modified methods have been proposed, each addressing some aspect of this difficulty from a particular viewpoint, but a comprehensive framework that provides a complete solution to the SSS problem is still missing. In this paper, we present a unified approach to LDA and investigate the SSS problem within the framework of statistical learning theory. This unified view yields a deeper understanding of LDA: we show that LDA (and its nonlinear extension) belongs to the same framework in which powerful classifiers such as support vector machines (SVMs) are formulated. The approach also allows us to establish an error bound for LDA. Finally, our experiments validate the theoretical analysis.
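To make the SSS problem concrete, the sketch below (an illustration, not code from the paper) builds a toy two-class dataset with more features than samples and checks the rank of the within-class scatter matrix. Classical LDA needs to invert this matrix, but its rank is at most n - c (samples minus classes), so with d > n it is necessarily singular; the dimensions and class means here are arbitrary choices for the demonstration.

```python
import numpy as np

# Toy SSS setting: 2 classes, 5 samples each (n = 10), d = 50 features (d > n).
rng = np.random.default_rng(0)
d, n_per_class = 50, 5
X1 = rng.normal(0.0, 1.0, size=(n_per_class, d))  # class 1 samples
X2 = rng.normal(1.0, 1.0, size=(n_per_class, d))  # class 2 samples

def within_class_scatter(classes):
    """S_w = sum over classes of sum_i (x_i - mu_c)(x_i - mu_c)^T."""
    dim = classes[0].shape[1]
    Sw = np.zeros((dim, dim))
    for Xc in classes:
        Xc_centered = Xc - Xc.mean(axis=0)
        Sw += Xc_centered.T @ Xc_centered
    return Sw

Sw = within_class_scatter([X1, X2])
rank = np.linalg.matrix_rank(Sw)
# rank(S_w) <= n - c = 8, far below d = 50, so S_w cannot be inverted
# and the classical eigenproblem S_w^{-1} S_b w = lambda w is ill-posed.
print(rank, d)
```

Modified LDA methods work around this singularity, e.g. by regularizing S_w (adding a small multiple of the identity) or by projecting onto a low-dimensional subspace first; the unified treatment in the paper analyzes such remedies within statistical learning theory.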