Knowledge discovery from big data demands effective representation of the data. However, big data are often characterized by high dimensionality, which makes knowledge discovery more difficult. Many techniques for dimensionality reduction have been proposed, including the well-known Fisher Linear Discriminant Analysis (LDA). The Fisher criterion, however, is incapable of dealing with heteroscedasticity in the data. A technique based on the Chernoff criterion for linear dimensionality reduction has been proposed that is capable of exploiting heteroscedastic information in the data. While the Chernoff criterion has been shown to outperform the Fisher criterion, a clear understanding of its exact behavior is lacking. In this article, we show precisely what can be expected from the Chernoff criterion. In particular, we show that the Chernoff criterion exploits the Fisher and Fukunaga-Koontz transforms in computing its linear discriminants. Furthermore, we show that a recently proposed decomposition of the data space into four subspaces is incomplete, and we argue how the decomposition can best be enriched to account for heteroscedasticity in the data. Finally, we present experimental results validating our theoretical analysis.
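The three ingredients the abstract relates to one another, the Fisher directions, the Fukunaga-Koontz transform (FKT), and the Chernoff distance between two Gaussian class-conditional densities, can be sketched in a few lines of NumPy. The snippet below is a minimal two-class illustration of these standard constructions, not the article's method; the function names (`fisher_directions`, `fukunaga_koontz`, `chernoff_distance`) are ours, and the covariance matrices are assumed positive definite.

```python
# Minimal sketch (assumed two-class setting): Fisher LDA directions,
# the Fukunaga-Koontz transform, and the two-class Chernoff distance.
import numpy as np

def scatter_matrices(X1, X2):
    """Class means, class covariances, and within/between scatter."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    Sw = S1 + S2                          # within-class scatter (unweighted)
    d = (m1 - m2).reshape(-1, 1)
    Sb = d @ d.T                          # between-class scatter
    return m1, m2, S1, S2, Sw, Sb

def fisher_directions(Sw, Sb):
    """Fisher LDA: leading eigenvectors of Sw^{-1} Sb."""
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order].real

def fukunaga_koontz(S1, S2):
    """FKT: whiten S1 + S2 (assumed positive definite), then
    diagonalize the whitened S1. The whitened S1 and S2 share
    eigenvectors, and their eigenvalues sum to one."""
    evals, evecs = np.linalg.eigh(S1 + S2)
    W = evecs @ np.diag(evals ** -0.5)    # whitening transform
    lam, V = np.linalg.eigh(W.T @ S1 @ W)
    return W @ V, lam                     # FKT axes, eigenvalues of whitened S1

def chernoff_distance(m1, m2, S1, S2, alpha=0.5):
    """Chernoff distance between N(m1, S1) and N(m2, S2);
    alpha = 0.5 gives the Bhattacharyya distance."""
    Sa = alpha * S1 + (1 - alpha) * S2
    d = m1 - m2
    term1 = 0.5 * alpha * (1 - alpha) * d @ np.linalg.solve(Sa, d)
    term2 = 0.5 * (np.linalg.slogdet(Sa)[1]
                   - alpha * np.linalg.slogdet(S1)[1]
                   - (1 - alpha) * np.linalg.slogdet(S2)[1])
    return term1 + term2
```

Note that when S1 = S2 (homoscedastic classes), the log-determinant term of the Chernoff distance vanishes and only the Fisher-style mean-difference term remains; it is the second term that carries the heteroscedastic information, and, as the article analyzes, its contribution to the resulting linear discriminants is tied to the Fukunaga-Koontz axes along which the two class covariances differ most.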