Dimensionality reduction is an important topic in machine learning and can be broadly divided into two categories: feature selection and subspace learning. Many dimensionality reduction methods have been proposed over the past decades, but most of them study feature selection and subspace learning independently. In this paper, we present a framework for joint feature selection and subspace learning. We reformulate the subspace learning problem and impose an L2,1-norm penalty on the projection matrix to achieve row-sparsity, so that relevant features are selected and the transformation is learned simultaneously. We discuss two instantiations of the proposed framework and present their optimization algorithms. Experiments on benchmark face recognition data sets show that the proposed framework substantially outperforms state-of-the-art methods.
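To make the row-sparsity idea concrete, the sketch below shows a generic iteratively reweighted least-squares solver for an L2,1-regularized projection problem, min_W ||XW - Y||_F^2 + lambda * ||W||_{2,1}, where rows of W with near-zero norm correspond to discarded features. This is only an illustrative sketch under assumed inputs (a data matrix X and a regression target Y), not the paper's exact reformulation or optimization algorithm; the function names and parameters are hypothetical.

```python
import numpy as np

def l21_norm(W):
    """L2,1-norm of W: the sum of the Euclidean norms of its rows."""
    return np.sum(np.sqrt(np.sum(W ** 2, axis=1)))

def solve_l21_regression(X, Y, lam=1.0, n_iter=50, eps=1e-8):
    """Generic reweighted least-squares sketch (an assumption, not the
    paper's algorithm) for  min_W ||XW - Y||_F^2 + lam * ||W||_{2,1}.
    X: (n_samples, n_features), Y: (n_samples, n_components)."""
    XtX = X.T @ X
    XtY = X.T @ Y
    # Start from a ridge solution so the first reweighting is well defined.
    W = np.linalg.solve(XtX + 1e-6 * np.eye(X.shape[1]), XtY)
    for _ in range(n_iter):
        # Reweighting: each row j contributes lam / (2 * ||w_j||) to the diagonal.
        row_norms = np.sqrt(np.sum(W ** 2, axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))
        W = np.linalg.solve(XtX + lam * D, XtY)
    return W

# Toy usage: features are ranked by the row norms of W; rows driven toward
# zero by the L2,1 penalty correspond to features that are not selected.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
Y = rng.standard_normal((100, 3))
W = solve_l21_regression(X, Y, lam=5.0)
row_norms = np.sqrt(np.sum(W ** 2, axis=1))
selected_features = np.argsort(row_norms)[::-1][:10]
```

Because the penalty couples all columns of W within each row, entire rows shrink together, which is what lets a single projection matrix perform feature selection and subspace learning at the same time.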