Linear Discriminant Analysis for Two Classes via Removal of Classification Structure
IEEE Transactions on Pattern Analysis and Machine Intelligence
A Deterministic Annealing Approach for Parsimonious Design of Piecewise Regression Models
IEEE Transactions on Pattern Analysis and Machine Intelligence
Two Variations on Fisher's Linear Discriminant for Pattern Recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence
On Optimal Pairwise Linear Classifiers for Normal Distributions: The Two-Dimensional Case
IEEE Transactions on Pattern Analysis and Machine Intelligence
Non-iterative Heteroscedastic Linear Dimension Reduction for Two-Class Data
Proceedings of the Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition
Pattern Classification (2nd Edition)
Selecting the best hyperplane in the framework of optimal pairwise linear classifiers
Pattern Recognition Letters
Linear Dimensionality Reduction via a Heteroscedastic Extension of LDA: The Chernoff Criterion
IEEE Transactions on Pattern Analysis and Machine Intelligence
Pattern Recognition, Third Edition
On the performance of chernoff-distance-based linear dimensionality reduction techniques
AI'06 Proceedings of the 19th international conference on Advances in Artificial Intelligence: Canadian Society for Computational Studies of Intelligence
A theoretical analysis comparing two linear dimensionality reduction (LDR) techniques, namely Fisher's discriminant (FD) and Loog-Duin (LD) dimensionality reduction, is presented. The necessary and sufficient conditions under which FD and LD produce the same linear transformation are stated and proved. To derive these conditions, it is first shown that the two criteria preserve the same maximum value after a diagonalization process is applied; the necessary and sufficient conditions are then derived for various cases, including coincident covariance matrices, coincident prior probabilities, and the case in which one of the covariance matrices is the identity. A measure for comparing the two criteria is derived from these conditions and used to show empirically that the conditions are statistically related to the classification error of a post-processing quadratic classifier and to the Chernoff distance in the transformed space.
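The coincident-covariance condition in the abstract can be illustrated with a small sketch. The code below computes the two-class FD direction and, following Loog and Duin's two-class Chernoff criterion, an LD direction obtained from the leading eigenvector of the criterion matrix in the whitened space (between-class scatter minus the prior-weighted matrix logarithms of the whitened class covariances). The function names and the test data are illustrative, not from the paper; this is a sketch of the construction, not the authors' implementation.

```python
import numpy as np

def inv_sqrt(S):
    # Symmetric inverse square root of an SPD matrix via eigendecomposition.
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

def sym_log(S):
    # Matrix logarithm of an SPD matrix via eigendecomposition.
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def fisher_direction(m1, m2, S1, S2, p1):
    # Classical two-class FD direction: Sw^{-1} (m1 - m2),
    # with Sw the prior-weighted within-class scatter.
    Sw = p1 * S1 + (1 - p1) * S2
    return np.linalg.solve(Sw, m1 - m2)

def loog_duin_direction(m1, m2, S1, S2, p1):
    # Two-class Loog-Duin (Chernoff criterion) direction, sketched in the
    # whitened space: when S1 == S2 the log terms vanish and the criterion
    # matrix reduces to the whitened between-class scatter, i.e. FD.
    p2 = 1 - p1
    Sw = p1 * S1 + p2 * S2
    W = inv_sqrt(Sw)                       # whitening transform Sw^{-1/2}
    d = W @ (m1 - m2)                      # whitened mean difference
    Sb_w = p1 * p2 * np.outer(d, d)        # whitened between-class scatter
    M = Sb_w - (1.0 / (p1 * p2)) * (
        p1 * sym_log(W @ S1 @ W) + p2 * sym_log(W @ S2 @ W))
    vals, vecs = np.linalg.eigh(M)
    v = vecs[:, -1]                        # leading eigenvector
    return W @ v                           # map back to original coordinates
```

With coincident covariance matrices the two directions are parallel, matching one of the necessary and sufficient conditions discussed in the abstract; with distinct covariances the log terms contribute and the directions generally differ.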