Support vector machine (SVM) training may be posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint. We propose a (block) coordinate gradient descent method for solving this problem and, more generally, linearly constrained smooth optimization. Our method is closely related to decomposition methods currently popular for SVM training. We establish global convergence and, under a local error bound assumption (which is satisfied by the SVM QP), a linear rate of convergence when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We show that, for the SVM QP with n variables, this rule can be implemented in O(n) operations using Rockafellar's notion of conformal realization. Thus, for SVM training, our method requires only O(n) operations per iteration and, in contrast to existing decomposition methods, achieves linear convergence without additional assumptions. We report our numerical experience with the method on some large SVM QPs arising from two-class data classification. Our experience suggests that the method can be efficient for SVM training with nonlinear kernels.
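For context, the dual QP the abstract refers to has the standard form

    min_a  (1/2) a'Qa - e'a   subject to  0 <= a <= C,  y'a = 0,

where Q_ij = y_i y_j K(x_i, x_j) for a kernel K and labels y_i in {+1, -1}. The Python sketch below illustrates the generic two-variable decomposition step that methods of this family build on, using a simple maximal-violating-pair working-set rule. It is an illustrative assumption throughout, not the paper's method: the paper's Gauss-Southwell-type rule with conformal realization, its convergence guarantees, and all names and parameters here (rbf_kernel, smo_sketch, gamma, C, tol) are not taken from the source.

import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Gram matrix for a Gaussian (RBF) kernel; the kernel choice is illustrative.
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def smo_sketch(X, y, C=1.0, tol=1e-3, max_iter=10000, gamma=0.5):
    # Two-variable decomposition for the dual SVM QP (y entries must be +1/-1).
    # Working pairs come from a maximal-violating-pair rule, NOT the paper's
    # Gauss-Southwell-type rule implemented via conformal realization.
    n = len(y)
    Q = (y[:, None] * y[None, :]) * rbf_kernel(X, gamma)
    a = np.zeros(n)
    for _ in range(max_iter):
        # Gradient of (1/2)a'Qa - e'a; recomputed here for clarity, while
        # practical solvers update it in O(n) from the two changed coordinates.
        grad = Q @ a - 1.0
        up = ((y > 0) & (a < C)) | ((y < 0) & (a > 0))   # can move "up"
        lo = ((y > 0) & (a > 0)) | ((y < 0) & (a < C))   # can move "down"
        if not up.any() or not lo.any():
            break
        yg = -y * grad
        i = int(np.argmax(np.where(up, yg, -np.inf)))
        j = int(np.argmin(np.where(lo, yg, np.inf)))
        if yg[i] - yg[j] < tol:               # approximate KKT optimality
            break
        # Exact minimization along d_i = y_i, d_j = -y_j (preserves y'a = 0),
        # then clip the step so both variables stay in [0, C].
        quad = max(Q[i, i] + Q[j, j] - 2.0 * y[i] * y[j] * Q[i, j], 1e-12)
        step = (yg[i] - yg[j]) / quad
        step = min(step,
                   C - a[i] if y[i] > 0 else a[i],
                   a[j] if y[j] > 0 else C - a[j])
        a[i] += y[i] * step
        a[j] -= y[j] * step
    return a

Each iteration touches two coordinates along a direction that keeps y'a constant, which is why the single equality constraint forces a working set of size at least two; the paper's contribution, as the abstract states, is a working-set rule of this general kind that costs O(n) per iteration while also guaranteeing a linear rate of convergence.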