Linear discriminant analysis (LDA) has been an active topic of research over the last century. However, existing algorithms have several limitations when applied to visual data. LDA is optimal only for Gaussian-distributed classes with equal covariance matrices, and it can extract at most C-1 features, where C is the number of classes (see the first sketch below). Moreover, LDA does not scale well to high-dimensional data (it overfits), and it cannot optimally handle multimodal class distributions. In this paper, we introduce Multimodal Oriented Discriminant Analysis (MODA), an LDA extension that overcomes these drawbacks. A new formulation and several novelties are proposed:

• An optimal dimensionality reduction for multimodal Gaussian classes with different covariances is derived. The new criterion allows more than C-1 features to be extracted.
• A covariance approximation is introduced to improve generalization and avoid overfitting on high-dimensional data (see the second sketch after this list).
• A linear-time iterative majorization method is proposed to find a local optimum.

Several synthetic and real experiments on face recognition show that MODA outperforms existing linear techniques.
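To make the C-1 ceiling concrete, the following is a minimal sketch of the classical LDA projection that MODA generalizes. This is textbook LDA, not MODA itself, and the names (lda_projection, n_components, the ridge constant) are illustrative assumptions rather than the paper's notation.

```python
import numpy as np
from scipy.linalg import eigh

def lda_projection(X, y, n_components):
    """Classical LDA: solve the generalized eigenproblem Sb v = lambda Sw v."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # A small ridge keeps Sw positive definite when samples are scarce.
    evals, evecs = eigh(Sb, Sw + 1e-6 * np.eye(d))
    # Sb has rank at most C-1 (C = number of classes), so at most C-1
    # eigenvalues are nonzero: this is the feature-count ceiling that
    # MODA's multimodal criterion removes.
    order = np.argsort(evals)[::-1]
    return evecs[:, order[:n_components]]
```

Projecting with `Z = X @ lda_projection(X, y, n_components)` therefore yields informative features only up to C-1 dimensions, no matter how large n_components is set.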
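The abstract does not spell out which covariance approximation the paper uses, so the second sketch below shows one standard option only: shrinking each sample covariance toward a scaled identity. The function name and the alpha parameter are hypothetical, and the paper's actual approximation may differ.

```python
import numpy as np

def shrinkage_covariance(X, alpha=0.1):
    """Regularized covariance estimate for high-dimensional, small-sample data."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)           # center the samples
    S = Xc.T @ Xc / max(n - 1, 1)     # sample covariance
    # Blending with (trace(S)/d) * I keeps the estimate well conditioned
    # when n << d, the regime where plain LDA tends to overfit.
    return (1 - alpha) * S + alpha * (np.trace(S) / d) * np.eye(d)
```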