The nature of statistical learning theory.
Making large-scale support vector machine learning practical. Advances in Kernel Methods.
Fast training of support vector machines using sequential minimal optimization. Advances in Kernel Methods.
Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond.
Polynomial-Time Decomposition Algorithms for Support Vector Machines. Machine Learning.
Duality and Geometry in SVM Classifiers. ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning.
Training ν-Support Vector Classifiers: Theory and Algorithms. Neural Computation.
Improvements to Platt's SMO Algorithm for SVM Classifier Design. Neural Computation.
Working Set Selection Using Second Order Information for Training Support Vector Machines. The Journal of Machine Learning Research.
A general soft method for learning SVM classifiers with L1-norm penalty. Pattern Recognition.
Second-order SMO improves SVM online and active learning. Neural Computation.
Simple solvers for large quadratic programming tasks. Proceedings of the 27th DAGM Conference on Pattern Recognition (DAGM '05).
On the generalization of soft margin algorithms. IEEE Transactions on Information Theory.
A fast iterative nearest point algorithm for support vector machine classifier design. IEEE Transactions on Neural Networks.
A Simple Proof of the Convergence of the SMO Algorithm for Linearly Separable Problems. ICANN '09 Proceedings of the 19th International Conference on Artificial Neural Networks: Part I.
First and Second Order SMO Algorithms for LS-SVM Classifiers. Neural Processing Letters.
Improved conjugate gradient implementation for least squares support vector machines. Pattern Recognition Letters.
SVM training is usually discussed from two different algorithmic points of view. The first is provided by decomposition methods such as SMO and SVMLight, while the second encompasses geometric methods that solve a Nearest Point Problem (NPP), the Gilbert-Schlesinger-Kozinec (GSK) and Mitchell-Demyanov-Malozemov (MDM) algorithms being the most representative ones. In this work we show that both approaches are essentially coincident. More precisely, we show that a slight modification of SMO in which, at each iteration, both updating multipliers correspond to patterns in the same class solves NPP and, moreover, that this modification coincides with an extended MDM algorithm. In addition, we propose a new way to apply the MDM algorithm to NPP problems over reduced convex hulls.
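To make the geometric viewpoint concrete, the following is a minimal sketch of an MDM-style update for a simplified NPP: finding the point of a single convex hull closest to the origin. It is illustrative only, not the paper's extended algorithm; all function names are assumptions, and the two-class NPP and reduced-hull variants add constraints not handled here.

```python
# MDM-style sketch for a simplified Nearest Point Problem (NPP):
# find the point of conv{x_1, ..., x_n} closest to the origin.
# At each step MDM picks a working pair of points and transfers convex
# weight between them -- structurally the same two-multiplier update
# that SMO performs within one class.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def mdm_nearest_point(points, iters=1000, tol=1e-12):
    n, dim = len(points), len(points[0])
    alpha = [1.0 / n] * n  # convex coefficients over the points
    for _ in range(iters):
        # Current hull point w = sum_k alpha_k x_k.
        w = [sum(alpha[k] * points[k][d] for k in range(n)) for d in range(dim)]
        # Working pair: the supported point farthest along w (weight to shrink)
        # and the vertex most opposed to w (weight to grow).
        i = max((k for k in range(n) if alpha[k] > 0), key=lambda k: dot(w, points[k]))
        j = min(range(n), key=lambda k: dot(w, points[k]))
        d = [points[i][t] - points[j][t] for t in range(dim)]
        dd = dot(d, d)
        if dd == 0 or dot(w, d) <= tol:
            break  # optimality gap closed
        # Exact line search for min ||w - lam * d||, clipped to keep alpha >= 0.
        lam = min(alpha[i], dot(w, d) / dd)
        alpha[i] -= lam
        alpha[j] += lam
    return alpha
```

For the hull of (2, 0), (0, 2) and (2, 2), the weights converge so that the represented point is (1, 1), the projection of the origin onto the facing edge; a class-constrained working-pair rule of this kind is what links the modified SMO to the extended MDM algorithm discussed above.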