The nature of statistical learning theory.
Making large-scale support vector machine learning practical. Advances in kernel methods.
Fast training of support vector machines using sequential minimal optimization. Advances in kernel methods.
Duality and geometry in SVM classifiers. ICML '00: Proceedings of the Seventeenth International Conference on Machine Learning.
Improvements to Platt's SMO algorithm for SVM classifier design. Neural Computation.
Working set selection using second order information for training support vector machines. The Journal of Machine Learning Research.
On the equivalence of the SMO and MDM algorithms for SVM training. ECML PKDD '08: Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases, Part I.
On the generalization of soft margin algorithms. IEEE Transactions on Information Theory.
On the convergence of the decomposition method for support vector machines. IEEE Transactions on Neural Networks.
Asymptotic convergence of an SMO algorithm without any assumptions. IEEE Transactions on Neural Networks.
A study on SMO-type decomposition methods for support vector machines. IEEE Transactions on Neural Networks.
A common framework for the convergence of the GSK, MDM and SMO algorithms. ICANN'10: Proceedings of the 20th International Conference on Artificial Neural Networks, Part II.
We give a new proof of the convergence of the SMO algorithm for SVM training over linearly separable problems. It partly builds on Mitchell et al.'s proof of the convergence of the MDM algorithm for finding the point of a convex set closest to the origin. Our proof relies on a simple derivation of SMO that we also present here; while less general than previous convergence proofs, it is considerably simpler and yields algorithmic insight into the workings of SMO.
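To make the nearest-point problem concrete, the following is a minimal NumPy sketch of an MDM-style iteration for finding the point of the convex hull of a finite point set closest to the origin. The function name, tolerance, and iteration cap are our own illustrative choices, not the authors' implementation.

import numpy as np

def mdm_nearest_point(X, tol=1e-8, max_iter=10000):
    # Hypothetical didactic sketch of the MDM iteration: X is an (n, d)
    # array of points; we search conv{X[0], ..., X[n-1]} for the point
    # w closest to the origin, keeping w as a convex combination of rows.
    n = X.shape[0]
    alpha = np.full(n, 1.0 / n)      # start at the barycenter
    w = alpha @ X                    # current candidate point

    for _ in range(max_iter):
        proj = X @ w                 # <w, x_t> for every vertex
        i = int(np.argmin(proj))     # vertex to move weight toward
        support = np.flatnonzero(alpha > 0)
        j = int(support[np.argmax(proj[support])])  # supported vertex to unload

        gap = proj[j] - proj[i]      # MDM violation measure; zero at the optimum
        if gap <= tol:
            break

        d = X[i] - X[j]              # weight-transfer direction
        # Unconstrained minimizer of ||w + delta * d||^2 is gap / <d, d>,
        # clipped so alpha[j] stays nonnegative.
        delta = min(gap / (d @ d), alpha[j])
        alpha[i] += delta
        alpha[j] -= delta
        w = w + delta * d

    return alpha, w

points = np.array([[2.0, 1.0], [1.0, 3.0], [4.0, 2.0]])
alpha, w = mdm_nearest_point(points)   # w is the hull point nearest the origin

Each iteration transfers weight between exactly two convex coefficients, which is the same two-coefficient update pattern SMO applies to the dual SVM variables; this shared structure is the link between the two algorithms that the abstract alludes to.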