We present a discrete spectral framework for the sparse, cardinality-constrained solution of a generalized Rayleigh quotient. This NP-hard combinatorial optimization problem is central to supervised learning tasks such as sparse LDA, feature selection, and relevance ranking for classification. We derive a new generalized form of the Inclusion Principle for variational eigenvalue bounds, leading to exact, optimal sparse linear discriminants via branch-and-bound search. An efficient greedy (approximate) technique is also presented. The generalization performance of our sparse LDA algorithms is demonstrated on real-world UCI machine learning benchmarks and compared to a leading SVM-based gene selection algorithm for cancer classification.
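The greedy (approximate) route described above can be illustrated as forward selection over the feature support: at each step, add the feature whose inclusion most increases the top generalized eigenvalue of the between-class and within-class scatter matrices restricted to the candidate support. The sketch below is a minimal illustration under that reading, not the authors' implementation; the function name and the NumPy-based generalized eigenvalue computation are assumptions.

```python
import numpy as np

def greedy_sparse_rayleigh(Sb, Sw, k):
    """Greedy forward selection for a cardinality-constrained generalized
    Rayleigh quotient: max_w (w'Sb w)/(w'Sw w) with |supp(w)| <= k.

    Sb : between-class scatter (symmetric PSD), Sw : within-class scatter
    (symmetric PD). Illustrative sketch only, not the paper's algorithm.
    """
    n = Sb.shape[0]
    support = []
    best_val = -np.inf
    for _ in range(k):
        best_j = None
        best_val = -np.inf
        for j in range(n):
            if j in support:
                continue
            idx = support + [j]
            A = Sb[np.ix_(idx, idx)]
            B = Sw[np.ix_(idx, idx)]
            # Top generalized eigenvalue of (A, B) on the candidate support.
            # B^{-1}A is similar to a symmetric matrix, so eigenvalues are real.
            val = np.max(np.linalg.eigvals(np.linalg.solve(B, A)).real)
            if val > best_val:
                best_j, best_val = j, val
        support.append(best_j)
    return sorted(support), best_val
```

Each greedy step costs one small generalized eigenvalue solve per remaining feature; the branch-and-bound variant in the paper instead prunes the search tree using the variational eigenvalue bounds from the generalized Inclusion Principle.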