Least-Squares Support Vector Machines (LS-SVMs) have become a successful alternative to classification and regression Support Vector Machines (SVMs), and they are used in a wide range of applications. Despite this, only limited effort has been devoted to designing efficient training algorithms for this class of models, in clear contrast to the vast number of such contributions in the field of classic SVMs. In this work we propose to combine the popular Sequential Minimal Optimization (SMO) method with a momentum strategy that reduces the number of iterations required for convergence while adding little computational cost per iteration, especially in those situations where the standard SMO algorithm for LS-SVMs fails to obtain fast solutions.
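The idea can be sketched in code. The following is a minimal, illustrative implementation, not the authors' exact method: it assumes the LS-SVM dual takes the form min_a (1/2) a'Qa - y'a subject to sum(a) = 0, with Q symmetric positive definite (kernel matrix plus I/gamma), and it realizes the "momentum" acceleration as an exact two-dimensional search over the current SMO direction e_i - e_j and the previous step. The function name `smo_momentum` and this particular momentum rule are assumptions for illustration; the paper's precise update may differ.

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def mat_vec(Q, v):
    return [dot(row, v) for row in Q]

def smo_momentum(Q, y, iters=500, tol=1e-10):
    """Sketch of SMO for the LS-SVM dual with a momentum-like plane search."""
    n = len(y)
    a = [0.0] * n
    g = [-yk for yk in y]            # gradient Q a - y at a = 0
    prev = [0.0] * n                 # previous feasible step (momentum buffer)
    for _ in range(iters):
        i = min(range(n), key=g.__getitem__)   # most negative gradient entry
        j = max(range(n), key=g.__getitem__)   # most positive gradient entry
        if g[j] - g[i] < tol:                  # KKT: all g equal => optimal
            break
        u = [0.0] * n
        u[i], u[j] = 1.0, -1.0                 # classic SMO direction e_i - e_j
        Qu, Qv = mat_vec(Q, u), mat_vec(Q, prev)
        A, B, C = dot(u, Qu), dot(u, Qv), dot(prev, Qv)
        p, q = -dot(u, g), -dot(prev, g)
        det = A * C - B * B
        if abs(det) < 1e-12:                   # momentum useless: plain SMO step
            t, s = p / A, 0.0
        else:                                  # exact minimizer on the 2D plane
            t = (p * C - q * B) / det
            s = (A * q - B * p) / det
        step = [t * u[k] + s * prev[k] for k in range(n)]
        for k in range(n):
            a[k] += step[k]
            g[k] += t * Qu[k] + s * Qv[k]      # cheap gradient refresh: g += Q step
        prev = step
    return a, g
```

Because both e_i - e_j and the previous step sum to zero, every update preserves the equality constraint, and the extra cost over plain SMO is essentially one additional kernel-matrix/vector product per iteration.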