A new result in the theory and computation of the least-norm solution of a linear program. Journal of Optimization Theory and Applications.
The nature of statistical learning theory.
Parallel Gradient Distribution in Unconstrained Optimization. SIAM Journal on Control and Optimization.
Feature Selection via Concave Minimization and Support Vector Machines. ICML '98: Proceedings of the Fifteenth International Conference on Machine Learning.
A Feature Selection Newton Method for Support Vector Machine Classification. Computational Optimization and Applications.
Knowledge-Based Linear Programming. SIAM Journal on Optimization.
Knowledge-Based Kernel Approximation. The Journal of Machine Learning Research.
Minimization of SC1 functions and the Maratos effect. Operations Research Letters.
The Interplay of Optimization and Machine Learning Research. The Journal of Machine Learning Research.
Chunking for massive nonlinear kernel classification. Optimization Methods & Software.
RV-SVM: An Efficient Method for Learning Ranking SVM. PAKDD '09: Proceedings of the 13th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining.
Linear Programming Boosting by Column and Row Generation. DS '09: Proceedings of the 12th International Conference on Discovery Science.
On the sparseness of 1-norm support vector machines. Neural Networks.
A smoothing function for 1-norm support vector machines. ICNC'09: Proceedings of the 5th International Conference on Natural Computation; IEEE Transactions on Neural Networks.
Sparse ensembles using weighted combination methods based on linear programming. Pattern Recognition.
Time series gene expression data classification via L1-norm temporal SVM. PRIB'10: Proceedings of the 5th IAPR International Conference on Pattern Recognition in Bioinformatics; The Journal of Machine Learning Research.
Efficient large scale linear programming support vector machines. ECML'06: Proceedings of the 17th European Conference on Machine Learning.
Pedestrian detection in images via cascaded L1-norm minimization learning method. Pattern Recognition.
1-Norm least squares twin support vector machines. Neurocomputing.
An efficient method for learning nonlinear ranking SVM functions. Information Sciences: An International Journal.
A general lp-norm support vector machine via mixed 0-1 programming. MLDM'12: Proceedings of the 8th International Conference on Machine Learning and Data Mining in Pattern Recognition.
Review: Supervised classification and mathematical optimization. Computers and Operations Research.
Sparse high-dimensional fractional-norm support vector machine via DC programming. Computational Statistics & Data Analysis.
A fast algorithm for kernel 1-norm support vector machines. Knowledge-Based Systems.
Support vector machines utilizing the 1-norm, typically set up as linear programs (Mangasarian, 2000; Bradley and Mangasarian, 1998), are formulated here as a completely unconstrained minimization of a convex, differentiable, piecewise-quadratic objective function in the dual space. The objective function, which has a Lipschitz continuous gradient and contains only one additional finite parameter, can be minimized by a generalized Newton method and leads to an exact solution of the support vector machine problem. The approach is based on a formulation of a very general linear program as an unconstrained minimization problem and its application to support vector machine classification problems. The present approach, which generalizes both (Mangasarian, 2004) and (Fung and Mangasarian, 2004), is also applied to nonlinear approximation, where a minimal number of nonlinear kernel functions are utilized to approximate a function from a given number of function values.
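For concreteness, the conventional constrained formulation that the abstract refers to can be sketched as follows. This is a minimal illustration of the standard 1-norm SVM set up as a linear program (the baseline that the paper's unconstrained dual reformulation replaces), not the paper's generalized Newton method itself. The helper name `l1_svm`, the variable splitting w = p - q, and the use of `scipy.optimize.linprog` are illustrative choices, not from the source.

```python
import numpy as np
from scipy.optimize import linprog

def l1_svm(X, y, C=1.0):
    """Illustrative 1-norm SVM as a linear program:
        min ||w||_1 + C * sum(xi)
        s.t. y_i (w . x_i + b) + xi_i >= 1,  xi >= 0.
    The 1-norm is linearized by splitting w = p - q with p, q >= 0."""
    m, n = X.shape
    # variable order: [p (n), q (n), b (1 free), xi (m)]
    c = np.concatenate([np.ones(2 * n), [0.0], C * np.ones(m)])
    Yx = y[:, None] * X  # rows y_i * x_i
    # margin constraints rewritten as A_ub @ z <= b_ub
    A_ub = -np.hstack([Yx, -Yx, y[:, None], np.eye(m)])
    b_ub = -np.ones(m)
    bounds = [(0, None)] * (2 * n) + [(None, None)] + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    z = res.x
    w = z[:n] - z[n:2 * n]
    b = z[2 * n]
    return w, b
```

Minimizing the 1-norm of w drives many of its components exactly to zero, which is why 1-norm SVMs are prized for feature selection; the paper's contribution is to solve an equivalent problem by unconstrained minimization of a piecewise-quadratic exterior penalty in the dual, rather than by an LP solver as above.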