Towards Structural Sparsity: An Explicit l2/l0 Approach
ICDM '10 Proceedings of the 2010 IEEE International Conference on Data Mining
In this paper, we propose a novel, robust, and pragmatic feature selection approach. Unlike sparse-learning-based feature selection methods, which tackle an approximate problem by imposing a sparsity regularizer in the objective function, the proposed method has only a single l2,1-norm loss term with an explicit l2,0-norm equality constraint. An efficient algorithm based on the augmented Lagrangian method is derived to solve this constrained optimization problem and find a stable local solution. Extensive experiments on four biological datasets show that, although the proposed model is non-convex, it outperforms its approximate convex counterparts and state-of-the-art feature selection methods, as measured by the classification accuracy of two popular classifiers. Moreover, since the regularization parameter of our method has an explicit meaning, namely the number of selected features, it avoids the burden of parameter tuning, making it a pragmatic feature selection method.
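To make the two norms in the abstract concrete, the following is a minimal NumPy sketch (not the authors' implementation; all function names are illustrative). It computes the l2,1-norm and the l2,0-norm of a weight matrix W, where rows correspond to features, and the Euclidean projection onto the l2,0 equality-style constraint (keep the k rows with the largest l2 norms, zero out the rest), which is the kind of subproblem an augmented-Lagrangian solver for this model would repeatedly solve.

```python
import numpy as np

def l21_norm(W):
    # l2,1-norm: sum of the l2 norms of the rows of W.
    return float(np.sum(np.linalg.norm(W, axis=1)))

def l20_norm(W):
    # l2,0-norm: number of rows of W with nonzero l2 norm,
    # i.e. the number of selected features.
    return int(np.sum(np.linalg.norm(W, axis=1) > 0))

def project_l20(W, k):
    # Euclidean projection onto {W : ||W||_{2,0} <= k}:
    # keep the k rows with the largest l2 norms, zero the rest.
    row_norms = np.linalg.norm(W, axis=1)
    keep = np.argsort(row_norms)[-k:]
    P = np.zeros_like(W)
    P[keep] = W[keep]
    return P

# Toy example: 3 candidate features, 2 classes.
W = np.array([[3.0, 4.0],   # row norm 5.0
              [0.1, 0.0],   # row norm 0.1
              [0.0, 2.0]])  # row norm 2.0
Wk = project_l20(W, 2)      # selects the two strongest feature rows
```

Because the constraint is expressed directly as the number of selected features k, there is no abstract regularization weight to tune, which is the pragmatic property the abstract emphasizes.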