Support vector machine (SVM) soft margin classifiers are important learning algorithms for classification problems. They can be stated as convex optimization problems and are suitable for large-scale data settings. Linear programming SVM classifiers are especially efficient for very large sample sizes. However, little is known about their convergence, in contrast to the well-understood quadratic programming SVM classifier. In this article, we point out the difficulty and provide an error analysis. Our analysis shows that the convergence behavior of the linear programming SVM is almost the same as that of the quadratic programming SVM. This is achieved by introducing a stepping-stone between the linear programming SVM and the classical 1-norm soft margin classifier. An upper bound for the misclassification error is presented for general probability distributions. Explicit learning rates are derived for deterministic and weakly separable distributions, and for distributions satisfying some Tsybakov noise condition.
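For context, the following is a minimal sketch of the two optimization schemes usually meant by these names in the kernel-learning literature; the hinge loss, the offset-free form, and the symbols (m samples, Mercer kernel K, regularization parameter \lambda) are assumptions made here for illustration, not details taken from the abstract. Given samples (x_1, y_1), \ldots, (x_m, y_m) with y_i \in \{-1, 1\}, the classical 1-norm soft margin (quadratic programming) SVM minimizes the regularized hinge loss over the reproducing kernel Hilbert space \mathcal{H}_K,

  f_z = \arg\min_{f \in \mathcal{H}_K} \; \frac{1}{m} \sum_{i=1}^{m} \bigl(1 - y_i f(x_i)\bigr)_+ + \lambda \|f\|_K^2,

whereas the linear programming SVM restricts f to the span of the kernel sections at the sample points, f = \sum_{j=1}^{m} \alpha_j K(\cdot, x_j), and regularizes the \ell^1 norm of the coefficients,

  \alpha_z = \arg\min_{\alpha \in \mathbb{R}^m} \; \frac{1}{m} \sum_{i=1}^{m} \Bigl(1 - y_i \sum_{j=1}^{m} \alpha_j K(x_i, x_j)\Bigr)_+ + \lambda \sum_{j=1}^{m} |\alpha_j|.

Both objectives are convex; rewriting the hinge loss with slack variables turns the first problem into a quadratic program and the second into a linear program, which is what makes the latter attractive for very large sample sizes.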