Linear Discriminant Analysis (LDA) has been widely used to extract linear features for classification. In real applications, the usefulness of the extracted features is usually confirmed only indirectly, through the classification error rate of a classifier built on them. Little attention has been paid to whether, and how, the discriminative features themselves can be interpreted as indicators of usefulness. We refer to this property as relevance, i.e., the capability of discriminative features to characterize the contribution of the original variables to classification. We approach relevance by examining how it can be lost in the course of extracting optimal discriminative features. The discrepancy between the relevance and the optimality of discriminative features is then shown to originate from the "angle" between the space spanned by the eigenvectors of the within-class scatter matrix and the primary space in which the original variables reside: for a given dataset, the larger this "angle", the less relevance can be discovered from the optimal discriminative features. Relevance and optimality are therefore treated as two constraint conditions, or a tradeoff, in extracting relevant discriminative features. Finally, a simulated experiment shows how relevance is lost as the "angle" changes. Experimental results on both the USPS handwritten digit and the PIE face databases show that the maximum margin criterion is a reasonable compromise between relevance and optimality, since it approximates the average class margin using Euclidean distances measured in the primary space.
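To make the quantities in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of the maximum margin criterion mentioned above: it builds the within-class and between-class scatter matrices Sw and Sb, then takes the leading eigenvectors of Sb − Sw as projection directions. Because these directions come from an ordinary symmetric eigenproblem, they stay orthonormal in the primary space, which is why MMC features remain easy to read back against the original variables. The function names and the synthetic data are illustrative assumptions.

```python
import numpy as np

def scatter_matrices(X, y):
    """Within-class (Sw) and between-class (Sb) scatter of data X with labels y."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)          # scatter around class mean
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)         # class-mean scatter, weighted by size
    return Sw, Sb

def mmc_features(X, y, k):
    """Top-k directions maximizing the maximum margin criterion tr(W^T (Sb - Sw) W)."""
    Sw, Sb = scatter_matrices(X, y)
    vals, vecs = np.linalg.eigh(Sb - Sw)        # symmetric eigendecomposition
    order = np.argsort(vals)[::-1]              # largest eigenvalues first
    return vecs[:, order[:k]]                   # columns are orthonormal projections
```

Unlike classical LDA, no inversion of Sw is required, so the sketch also works when Sw is singular (e.g. more pixels than samples, as in face data).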