Some empirical evidence shows that 1-norm Support Vector Machines (1-norm SVMs) produce sparse solutions; however, it is not clear how sparse 1-norm SVMs can actually be, nor whether their representation is sparser than that of standard SVMs. In this paper we study the sparseness of 1-norm SVMs and present two upper bounds on the number of nonzero coefficients in their decision functions. First, the number of nonzero coefficients in a 1-norm SVM is at most the number of exact support vectors, i.e., those lying exactly on the +1 and -1 discriminating surfaces, whereas in a standard SVM it equals the number of all support vectors; since the exact support vectors form a subset of the support vectors, this implies that 1-norm SVMs are sparser than standard SVMs. Second, the number of nonzero coefficients is at most the rank of the sample matrix. We give a brief review of the geometry of linear programming and of the primal steepest-edge pricing simplex method, which allows us to prove the two upper bounds and to evaluate their tightness experimentally. Experimental results on toy data sets and UCI data sets illustrate our analysis.
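The two bounds can be checked numerically. Below is a minimal sketch, not the authors' code: it trains a kernel 1-norm SVM by casting it as a linear program (solved here with SciPy's linprog), counts the nonzero expansion coefficients, and compares that count with (a) the number of exact support vectors lying on the +1 and -1 surfaces and (b) the rank of the kernel matrix, which plays the role of the sample matrix here. The RBF kernel, the toy data, and all parameter values (C, gamma, the zero tolerances) are illustrative assumptions.

# Sketch only: an LP formulation of the kernel 1-norm SVM, used to
# check the two sparseness bounds stated in the abstract. All data and
# parameters below are assumptions made for illustration.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy two-class data: two Gaussian blobs.
m = 60
X = np.vstack([rng.normal(-1.0, 0.8, (m // 2, 2)),
               rng.normal(+1.0, 0.8, (m // 2, 2))])
y = np.hstack([-np.ones(m // 2), np.ones(m // 2)])

# RBF kernel matrix on the training points.
gamma = 0.5
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)

# 1-norm SVM as a linear program:
#   min  sum(a_plus) + sum(a_minus) + C * sum(xi)
#   s.t. y_i * (K_i (a_plus - a_minus) + b) >= 1 - xi_i,  xi >= 0,
# with alpha = a_plus - a_minus and b = b_plus - b_minus,
# all LP variables nonnegative.
C = 10.0
n = m
c = np.hstack([np.ones(2 * n), [0.0, 0.0], C * np.ones(m)])
YK = y[:, None] * K                       # row i is y_i * K_i
A_ub = np.hstack([-YK, YK, -y[:, None], y[:, None], -np.eye(m)])
b_ub = -np.ones(m)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")

z = res.x
alpha = z[:n] - z[n:2 * n]
b = z[2 * n] - z[2 * n + 1]
f = K @ alpha + b                         # decision values on training points

tol = 1e-6
nnz = int(np.sum(np.abs(alpha) > tol))
# Exact support vectors: points lying on the +1 or -1 surface.
exact_sv = int(np.sum(np.abs(y * f - 1.0) < 1e-4))
rank_K = np.linalg.matrix_rank(K)

print(f"nonzero coefficients: {nnz}")
print(f"exact support vectors on the +/-1 surfaces: {exact_sv}")
print(f"rank of kernel matrix: {rank_K}")

Because the HiGHS solver returns a basic optimal solution of the LP, both printed counts should upper-bound the number of nonzero coefficients on this toy problem, in line with the two bounds above; degenerate data can make the first bound tight or slack.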