Topics in matrix analysis
Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection
IEEE Transactions on Pattern Analysis and Machine Intelligence
Wrappers for feature subset selection
Artificial Intelligence - Special issue on relevance
Choosing Multiple Parameters for Support Vector Machines
Machine Learning
Novel Methods for Subset Selection with Respect to Problem Knowledge
IEEE Intelligent Systems
Feature Selection via Concave Minimization and Support Vector Machines
ICML '98 Proceedings of the Fifteenth International Conference on Machine Learning
Feature Subset Selection and Order Identification for Unsupervised Learning
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
Theoretical and Empirical Analysis of ReliefF and RReliefF
Machine Learning
Pattern Classification (2nd Edition)
Margin based feature selection - theory and algorithms
ICML '04 Proceedings of the twenty-first international conference on Machine learning
IEEE Transactions on Pattern Analysis and Machine Intelligence
Neighborhood Preserving Embedding
ICCV '05 Proceedings of the Tenth IEEE International Conference on Computer Vision - Volume 2
R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization
ICML '06 Proceedings of the 23rd international conference on Machine learning
Graph Embedding and Extensions: A General Framework for Dimensionality Reduction
IEEE Transactions on Pattern Analysis and Machine Intelligence
Hybrid huberized support vector machines for microarray classification
Proceedings of the 24th international conference on Machine learning
Distance Metric Learning for Large Margin Nearest Neighbor Classification
The Journal of Machine Learning Research
Trace ratio criterion for feature selection
AAAI'08 Proceedings of the 23rd national conference on Artificial intelligence - Volume 2
Optimal feature selection for support vector machines
Pattern Recognition
Multi-task feature learning via efficient l2, 1-norm minimization
UAI '09 Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence
Local-Learning-Based Feature Selection for High-Dimensional Data Analysis
IEEE Transactions on Pattern Analysis and Machine Intelligence
Graph optimization for dimensionality reduction with sparsity constraints
Pattern Recognition
A Variance Minimization Criterion to Feature Selection Using Laplacian Regularization
IEEE Transactions on Pattern Analysis and Machine Intelligence
Dimensionality reduction by Mixed Kernel Canonical Correlation Analysis
Pattern Recognition
Joint feature selection and subspace learning
IJCAI'11 Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Two
Self-taught dimensionality reduction on the high-dimensional small-sized data
Pattern Recognition
Stochastic margin-based structure learning of Bayesian network classifiers
Pattern Recognition
On Similarity Preserving Feature Selection
IEEE Transactions on Knowledge and Data Engineering
Recent research has shown the benefits of the large margin framework for feature selection. In this paper, we propose a novel feature selection algorithm, termed Large Margin Subspace Learning (LMSL), which seeks a projection matrix that maximizes the margin of a given sample, defined as the distance between its nearest miss (the nearest neighbor with a different label) and its nearest hit (the nearest neighbor with the same label). Instead of computing the nearest neighbor of the given sample directly, we treat each sample with a different (same) label as a potential nearest miss (hit), with a probability estimated by kernel density estimation. In this way, the nearest miss (hit) is computed as an expectation over all samples of the different (same) classes. To perform feature selection, an ℓ2,1-norm penalty is imposed on the projection matrix to enforce row sparsity. An efficient algorithm is then proposed to solve the resulting optimization problem. Comprehensive experiments compare the proposed algorithm with five state-of-the-art algorithms: RFS, SPFS, mRMR, TR and LLFS. It achieves better performance than the first four; compared with LLFS, it attains competitive performance at a significantly lower computational cost.
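The two core ingredients of the abstract above can be illustrated with a short sketch: the expected nearest hit/miss of a sample under kernel-density weights, the resulting margin, and the row-sparsity-inducing ℓ2,1-norm. This is a minimal illustration in the original feature space, not the paper's implementation; the function names and the Gaussian kernel bandwidth `sigma` are assumptions, and LMSL would evaluate these distances under the learned projection.

```python
import numpy as np

def l21_norm(W):
    """Row-sparsity-inducing l2,1-norm: sum of the l2 norms of W's rows."""
    return np.sum(np.sqrt(np.sum(W ** 2, axis=1)))

def expected_hit_miss(X, y, i, sigma=1.0):
    """Expected nearest hit and miss of sample i.

    Each same-class (different-class) sample is a potential nearest hit
    (miss), weighted by a Gaussian kernel on its distance to sample i;
    the hit/miss is the weighted average of those candidates.
    """
    xi = X[i]
    same = (y == y[i])
    same[i] = False                      # exclude the sample itself
    diff = (y != y[i])

    def expectation(mask):
        d2 = np.sum((X[mask] - xi) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        w /= w.sum()                     # kernel weights as probabilities
        return w @ X[mask]

    return expectation(same), expectation(diff)

def margin(X, y, i, sigma=1.0):
    """Margin of sample i: distance to expected miss minus distance to expected hit."""
    hit, miss = expected_hit_miss(X, y, i, sigma)
    return np.linalg.norm(X[i] - miss) - np.linalg.norm(X[i] - hit)
```

For example, with `X = [[0, 0], [0.1, 0], [5, 0]]` and `y = [0, 0, 1]`, sample 0 has a single hit candidate at distance 0.1 and a single miss candidate at distance 5, so its margin is 4.9; maximizing such margins while penalizing `l21_norm` of the projection drives whole feature rows to zero.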