Machine Learning
Making large-scale support vector machine learning practical. In Advances in Kernel Methods.
Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods.
Least squares support vector machine classifiers. Neural Processing Letters.
SMO algorithm for least-squares SVM formulations. Neural Computation.
Benchmarking least squares support vector machine classifiers. Machine Learning.
Improvements to Platt's SMO algorithm for SVM classifier design. Neural Computation.
Training linear SVMs in linear time. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Working set selection using second order information for training support vector machines. Journal of Machine Learning Research.
Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. In Proceedings of the 24th International Conference on Machine Learning.
On the equivalence of the SMO and MDM algorithms for SVM training. In ECML PKDD '08: Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases, Part I.
LIBLINEAR: a library for large linear classification. Journal of Machine Learning Research.
Cycle-breaking acceleration of SVM training. Neurocomputing.
NESVM: a fast gradient method for support vector machines. In ICDM '10: Proceedings of the 2010 IEEE International Conference on Data Mining.
A fast iterative nearest point algorithm for support vector machine classifier design. IEEE Transactions on Neural Networks.
A study on SMO-type decomposition methods for support vector machines. IEEE Transactions on Neural Networks.
Momentum acceleration of least-squares support vector machines. In ICANN '11: Proceedings of the 21st International Conference on Artificial Neural Networks, Part II.
Improved conjugate gradient implementation for least squares support vector machines. Pattern Recognition Letters.
Non-sparse multiple kernel Fisher discriminant analysis. Journal of Machine Learning Research.
Accelerating FCM neural network classifier using graphics processing units with CUDA. Applied Intelligence.
Least squares support vector machine (LS-SVM) classifiers have traditionally been trained with conjugate gradient algorithms. In this work, completing the study by Keerthi et al., we explore the applicability of the SMO algorithm to the LS-SVM problem by comparing First Order and Second Order working set selections, concentrating on the RBF kernel, which is the most common choice in practice. It turns out that, over the full range of possible hyperparameter values, Second Order working set selection is overall more efficient than First Order. In any case, regardless of the selection scheme, the number of kernel operations performed by SMO appears to scale quadratically with the number of patterns. Moreover, asymptotic convergence to the optimum is proved, and the rate of convergence is shown to be linear for both selections.
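To make the setup concrete, the following is a minimal sketch of SMO applied to an LS-SVM dual of the kind studied by Keerthi et al. (equality constraint, no box constraints, so pair steps are never clipped), with an RBF kernel and a Second Order working set selection. All function names, hyperparameter values, and the toy data are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian RBF kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_smo(X, y, gamma=10.0, sigma=1.0, tol=1e-6, max_iter=100_000):
    # SMO for the LS-SVM dual: min 0.5*a'Qa - sum(a) s.t. sum(a*y) = 0,
    # where Q_ij = y_i y_j (K_ij + delta_ij / gamma). No box constraints.
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    Q = (y[:, None] * y[None, :]) * (K + np.eye(n) / gamma)
    alpha = np.zeros(n)
    g = -np.ones(n)                      # gradient of the dual at alpha = 0
    for _ in range(max_iter):
        F = y * g                        # all F_k coincide at the optimum
        i = int(np.argmin(F))
        if F.max() - F[i] < tol:         # maximal KKT violation small enough
            break
        # Curvature along the feasible pair direction (i, k).
        eta = np.maximum(Q[i, i] + np.diag(Q) - 2.0 * y[i] * y * Q[i], 1e-12)
        # Second Order selection: the partner j giving the largest decrease.
        gain = np.where(F > F[i], (F - F[i]) ** 2 / eta, -np.inf)
        j = int(np.argmax(gain))
        t = (F[j] - F[i]) / eta[j]       # exact unclipped line-search step
        alpha[i] += y[i] * t
        alpha[j] -= y[j] * t
        g += t * (y[i] * Q[i] - y[j] * Q[j])
    # Recover the bias from the primal KKT conditions (e_i = alpha_i / gamma).
    b = np.mean(y * (1.0 - alpha / gamma) - K @ (alpha * y))
    return alpha, b

# Illustrative toy problem: two well-separated clusters.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.],
              [3., 3.], [3., 4.], [4., 3.], [4., 4.]])
y = np.array([-1., -1., -1., -1., 1., 1., 1., 1.])
alpha, b = lssvm_smo(X, y)
pred = np.sign(rbf_kernel(X, X) @ (alpha * y) + b)
```

A First Order scheme would instead take j = argmax(F), the maximal violating pair; the Second Order choice above weighs each candidate violation by the curvature eta, which is what the abstract reports as more efficient overall.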