Fisher's linear discriminant analysis (LDA) is one of the most popular supervised linear dimensionality reduction methods. Unfortunately, LDA is not suitable for problems where class labels are unavailable and only the spatial or temporal association of data samples is implicitly indicative of class membership. In this study, a new strategy for reducing LDA to Hotelling's canonical correlation analysis (CCA) is proposed. CCA seeks strongly correlated projections between two views of data, and it has long been known to be equivalent to LDA when the data features are used in one view and the class labels in the other. The basic idea of the new equivalence between LDA and CCA, which we call within-class coupling CCA (WCCCA), is to apply CCA to pairs of data samples that are most likely to belong to the same class. We prove the equivalence between LDA and such an application of CCA. With this implicit representation of the class labels, WCCCA is applicable both to regular LDA problems and to problems in which only spatial and/or temporal continuity provides clues to the class labels.
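The classic equivalence the abstract builds on can be checked numerically: for a two-class problem, running CCA with the data features as one view and the class label as the (one-dimensional) second view recovers the Fisher direction up to scale. The sketch below is a minimal NumPy illustration of that equivalence on synthetic data, not the paper's WCCCA algorithm; all variable names and the data-generation setup are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: Gaussian clouds with shifted means in 5 dimensions.
n, d = 200, 5
X0 = rng.normal(loc=0.0, size=(n, d))
X1 = rng.normal(loc=1.0, size=(n, d))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Fisher LDA direction: w ∝ Sw^{-1} (mu1 - mu0), with Sw the within-class scatter.
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) * (n - 1) + np.cov(X1, rowvar=False) * (n - 1)
w_lda = np.linalg.solve(Sw, mu1 - mu0)

# CCA with the class label as the second view. With a one-dimensional
# second view, the CCA direction on the X side reduces to a ∝ Sxx^{-1} Sxy.
Xc = X - X.mean(axis=0)
yc = (y - y.mean()).reshape(-1, 1)
Sxx = Xc.T @ Xc
Sxy = Xc.T @ yc
w_cca = np.linalg.solve(Sxx, Sxy).ravel()

# The two directions agree up to scale: cosine similarity is 1.
cos = abs(w_lda @ w_cca) / (np.linalg.norm(w_lda) * np.linalg.norm(w_cca))
print(round(cos, 6))
```

The agreement is exact rather than approximate: the total scatter Sxx differs from Sw only by a rank-one term along (mu1 - mu0), so Sxx^{-1} Sxy and Sw^{-1}(mu1 - mu0) are proportional by the Sherman-Morrison identity.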