A note on the decomposition methods for support vector regression
Neural Computation
In a previous paper, the author (2001) proved the convergence of a commonly used decomposition method for support vector machines (SVMs). However, no theoretical justification was given for its stopping criterion, which is based on the gap of the violation of the optimality condition. It is essential that this gap asymptotically approach zero, so that existing implementations are guaranteed to stop within a finite number of iterations once a specified tolerance is reached. Here, we prove this result and illustrate it with two extensions: ν-SVM and the multiclass SVM of Crammer and Singer (2001). A further result shows that, in the final iterations of the decomposition method, only a particular set of variables is still being modified. This supports the use of the shrinking and caching techniques in some existing implementations. Finally, we prove the asymptotic convergence of a decomposition method for this multiclass SVM. We also discuss how this convergence proof differs from the one in another paper by Lin.
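To make the stopping criterion concrete, the sketch below implements a standard SMO-type decomposition step with a maximal-violating-pair working set and stops when the KKT violation gap falls below a tolerance `eps`. This is an illustrative reconstruction of the commonly used scheme the abstract refers to (linear kernel, two classes), not the exact algorithm or notation analyzed in the paper; the function name and the toy data are hypothetical.

```python
import numpy as np

def smo_decomposition(X, y, C=1.0, eps=1e-3, max_iter=10000):
    """Minimal SMO-type decomposition for the two-class SVM dual
    (linear kernel). The loop stops when the maximal violation of the
    KKT optimality conditions -- the "gap" -- drops below eps, the
    kind of tolerance-based criterion whose finite termination is at issue."""
    n = len(y)
    Q = (y[:, None] * y[None, :]) * (X @ X.T)   # Q_ij = y_i y_j <x_i, x_j>
    alpha = np.zeros(n)
    G = -np.ones(n)                              # gradient of 0.5 a'Qa - e'a
    gap = np.inf

    for _ in range(max_iter):
        # maximal-violating-pair working set selection
        I_up = [t for t in range(n)
                if (y[t] == 1 and alpha[t] < C) or (y[t] == -1 and alpha[t] > 0)]
        I_low = [t for t in range(n)
                 if (y[t] == -1 and alpha[t] < C) or (y[t] == 1 and alpha[t] > 0)]
        if not I_up or not I_low:
            gap = 0.0                            # no feasible violation left
            break
        i = max(I_up, key=lambda t: -y[t] * G[t])
        j = min(I_low, key=lambda t: -y[t] * G[t])
        gap = (-y[i] * G[i]) - (-y[j] * G[j])    # violation of optimality
        if gap < eps:                            # the stopping criterion
            break
        # solve the 2-variable subproblem along the feasible direction
        # u = y_i e_i - y_j e_j, which preserves sum_t y_t alpha_t
        a = Q[i, i] + Q[j, j] - 2 * y[i] * y[j] * Q[i, j]
        a = max(a, 1e-12)                        # guard degenerate curvature
        t_i = C - alpha[i] if y[i] == 1 else alpha[i]   # box limits on step
        t_j = alpha[j] if y[j] == 1 else C - alpha[j]
        t = min(gap / a, t_i, t_j)
        alpha[i] += y[i] * t
        alpha[j] -= y[j] * t
        G += t * (y[i] * Q[:, i] - y[j] * Q[:, j])      # gradient update
    return alpha, gap
```

On a toy separable data set this loop terminates with `gap < eps` after a handful of iterations; the paper's contribution is the proof that the gap does go to zero, so such a loop cannot run forever for any positive tolerance. The observation that only a particular set of variables is modified in the final iterations is what justifies shrinking (dropping bounded variables from the working-set search) and caching rows of `Q`.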