In this paper, we apply Sequential Unconstrained Minimization Techniques (SUMTs) to the classical formulations of both the L1-norm SVM and the least squares SVM (LSSVM). We show that each can be solved as a sequence of optimization problems subject only to box constraints. We propose relaxed SVM and relaxed LSSVM formulations, each corresponding to a single problem in the associated SUMT sequence, and an SMO-like algorithm that solves the relaxed formulations by updating one Lagrange multiplier at a time. On large benchmark datasets, the proposed methods yield results comparable to or better than the classical SVM and LSSVM formulations, at substantially higher training speeds.
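The abstract does not spell out the relaxed formulation or the multiplier update, so the sketch below is one plausible reading rather than the paper's algorithm: the equality constraint y'alpha = 0 of the classical SVM dual is moved into the objective as a quadratic penalty (a single member of a SUMT sequence), leaving only the box constraints 0 <= alpha_i <= C, and each Lagrange multiplier is then updated by an exact one-variable Newton step followed by clipping. The function name relaxed_svm_smo and the penalty weight mu are illustrative assumptions, not names from the paper.

```python
import numpy as np

def relaxed_svm_smo(K, y, C=1.0, mu=10.0, tol=1e-5, max_passes=100):
    """Coordinate-ascent sketch for a penalized ("relaxed") SVM dual.

    Maximizes  sum(alpha) - 0.5 * alpha' Q alpha - 0.5 * mu * (y' alpha)^2
    with Q_ij = y_i * y_j * K_ij, subject only to 0 <= alpha_i <= C.
    The equality constraint y' alpha = 0 of the classical dual is enforced
    softly by the quadratic penalty, so each alpha_i can be updated alone,
    one Lagrange multiplier at a time.
    """
    n = len(y)
    alpha = np.zeros(n)
    f = np.zeros(n)   # cached f_i = sum_j alpha_j * y_j * K_ij
    s = 0.0           # cached s = y' alpha (penalized toward zero)
    for _ in range(max_passes):
        max_step = 0.0
        for i in range(n):
            # Gradient and (negated) curvature of the objective in alpha_i.
            grad = 1.0 - y[i] * f[i] - mu * y[i] * s
            curv = K[i, i] + mu
            # Exact one-variable maximizer of the quadratic, clipped to the box.
            new_ai = float(np.clip(alpha[i] + grad / curv, 0.0, C))
            delta = new_ai - alpha[i]
            if delta != 0.0:
                alpha[i] = new_ai
                f += delta * y[i] * K[:, i]   # keep caches consistent
                s += delta * y[i]
                max_step = max(max_step, abs(delta))
        if max_step < tol:   # no multiplier moved much: converged
            break
    return alpha
```

In a full SUMT scheme the penalty weight mu would be increased across a sequence of such problems so that y'alpha = 0 is enforced ever more tightly; a "relaxed" formulation in the sense of the abstract would instead fix a single moderate mu and solve just that one box-constrained problem.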