Second-order SMO represents the state of the art in SVM training for moderate-size problems. In it, the solution is attained by solving a series of subproblems, each optimized with respect to just a pair of multipliers. In this paper we will illustrate how SMO works in a two-stage fashion, first setting the values of the bounded multipliers to the penalty factor C and then proceeding to adjust the non-bounded multipliers. Furthermore, during this second stage the pairs selected for updating often appear repeatedly over the course of the algorithm. Taking advantage of this, we propose a procedure to combine previously used descent directions that results in far fewer iterations in this second stage and that may also lead to noticeable savings in kernel operations.
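Since the page gives no detail beyond the abstract, the following is a minimal Python sketch of the baseline it describes: plain second-order SMO with LIBSVM-style working-set selection. It is not the paper's proposed method; the direction-combination idea from the abstract would replace the single pairwise update marked in the comments. All names, parameters, and the toy data (rbf_kernel, smo_second_order, gamma, eps) are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Full RBF kernel matrix; fine for a small illustrative problem.
    sq = np.sum(X * X, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def smo_second_order(K, y, C=1.0, eps=1e-3, max_iter=100000):
    # Plain second-order SMO for the SVM dual
    #   min 0.5 * a^T Q a - e^T a,  0 <= a <= C,  y^T a = 0,  with Q = (y y^T) * K.
    n = len(y)
    alpha = np.zeros(n)
    G = -np.ones(n)                       # dual gradient Q*alpha - e at alpha = 0
    for it in range(max_iter):
        I_up = np.where(((y > 0) & (alpha < C)) | ((y < 0) & (alpha > 0)))[0]
        I_low = np.where(((y > 0) & (alpha > 0)) | ((y < 0) & (alpha < C)))[0]
        yG = -y * G
        i = I_up[np.argmax(yG[I_up])]     # maximal violating index in I_up
        m = yG[i]
        if m - yG[I_low].min() < eps:     # KKT conditions met up to eps
            return alpha, it
        # Second-order choice of j: maximise the objective decrease b^2 / (2a).
        cand = I_low[yG[I_low] < m]
        b = m - yG[cand]
        a = np.maximum(K[i, i] + K[cand, cand] - 2.0 * K[i, cand], 1e-12)
        j = cand[np.argmin(-(b * b) / a)]
        # Unconstrained step along the (i, j) direction, clipped to the box.
        delta = (m - yG[j]) / max(K[i, i] + K[j, j] - 2.0 * K[i, j], 1e-12)
        delta = min(delta,
                    C - alpha[i] if y[i] > 0 else alpha[i],
                    alpha[j] if y[j] > 0 else C - alpha[j])
        # The paper's contribution would act here: reuse previously seen (i, j)
        # directions and combine them instead of taking a single pairwise step.
        d_i, d_j = y[i] * delta, -y[j] * delta
        alpha[i] += d_i
        alpha[j] += d_j
        G += (y * y[i] * K[:, i]) * d_i + (y * y[j] * K[:, j]) * d_j
    return alpha, max_iter

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 1.0, (40, 2)), rng.normal(1.0, 1.0, (40, 2))])
    y = np.hstack([-np.ones(40), np.ones(40)])
    alpha, iters = smo_second_order(rbf_kernel(X), y, C=1.0)
    print(f"{iters} iterations, {np.sum(alpha > 1e-8)} support vectors")
```

In this baseline, multipliers that end up bounded are typically pushed to C early on (the first stage the abstract refers to), after which the remaining non-bounded multipliers are adjusted over many repeated (i, j) pairs; that repetition is what the proposed direction-combination exploits.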