Moving beyond linear and kernel-based feature extraction, we propose a generalized feature-extraction formulation based on the so-called Graph Embedding framework, and present two novel correlation-metric-based algorithms within it. Correlation Embedding Analysis (CEA) incorporates both correlational mapping and discriminating analysis: it boosts discriminating power by mapping data from a high-dimensional hypersphere onto a low-dimensional hypersphere while preserving intrinsic neighbor relations through local graph modeling. Correlational Principal Component Analysis (CPCA) generalizes the conventional Principal Component Analysis (PCA) algorithm to data distributed on a high-dimensional hypersphere. The advantages of both algorithms stem from two facts: 1) they are tailored to normalized data, which are often the output of the data-preprocessing step, and 2) they are designed directly with the correlation metric, which has been shown to be generally better than Euclidean distance for classification purposes. Extensive comparisons with existing algorithms in visual-classification experiments demonstrate the effectiveness of the proposed methods.
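The abstract does not spell out the CPCA derivation, but the core idea it describes, applying PCA-style analysis to data constrained to a hypersphere, can be illustrated with a minimal sketch: length-normalize each sample (so that inner products become correlation/cosine values) and then compute a principal subspace of the normalized data. The function name `cpca_sketch` and the particular normalize-then-eigendecompose recipe below are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def cpca_sketch(X, n_components):
    """Illustrative sketch: PCA on sphere-normalized data.

    X: (n_samples, n_features) data matrix.
    Returns an orthonormal projection basis of shape (n_features, n_components).
    """
    # Project each sample onto the unit hypersphere; on normalized data
    # the correlation metric coincides with cosine similarity.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    # Standard PCA machinery on the normalized samples.
    Xc = Xn - Xn.mean(axis=0)
    cov = Xc.T @ Xc / (Xn.shape[0] - 1)
    # eigh is appropriate for the symmetric covariance matrix.
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order]

# Example: reduce 5-dimensional normalized data to a 2-D subspace.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
W = cpca_sketch(X, 2)
Z = (X / np.linalg.norm(X, axis=1, keepdims=True)) @ W  # embedded data
```

Because the basis comes from an eigendecomposition of a symmetric matrix, its columns are orthonormal, so the embedding is a simple matrix product with no extra scaling.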