Making large-scale support vector machine learning practical. In Advances in Kernel Methods.
Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods.
Efficient SVM regression training with SMO. Machine Learning.
A simple decomposition method for support vector machines. Machine Learning.
A note on the decomposition methods for support vector regression. Neural Computation.
Improvements to Platt's SMO algorithm for SVM classifier design. Neural Computation.
Improvements to the SMO algorithm for SVM regression. IEEE Transactions on Neural Networks.
Global convergence of decomposition learning methods for support vector machines. IEEE Transactions on Neural Networks.
Decomposition methods have been widely used to efficiently solve the large-scale quadratic programming (QP) problems arising in support vector regression (SVR). A decomposition method breaks a large QP problem into a series of smaller QP subproblems, each of which can be solved much faster than the original problem. In this paper, we analyze the global convergence of decomposition methods for SVR and show that the decomposition methods for the convex programming problem formulated by Flake and Lawrence always stop within a finite number of iterations.
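The decomposition scheme the abstract describes can be sketched on a generic box-constrained QP with a single equality constraint, which is the form the SVR dual takes after the usual substitution beta_i = alpha_i - alpha_i*. The sketch below is illustrative only, not the algorithm analyzed in the paper: the function name `decompose_qp`, the maximal-violating-pair working-set rule, and the stopping tolerance are assumptions. Each iteration fixes all variables except a two-element working set, solves that two-variable subproblem analytically in closed form, and repeats until the KKT violation falls below a tolerance.

```python
import numpy as np

def decompose_qp(Q, p, C, tol=1e-6, max_iter=10000):
    """Illustrative two-variable (SMO-style) decomposition for
        min  0.5 * x'Qx + p'x
        s.t. sum(x) = 0,  -C <= x_i <= C.
    Q is assumed symmetric positive semidefinite (a kernel matrix)."""
    n = len(p)
    x = np.zeros(n)          # feasible starting point: sum(x) = 0
    g = p.copy()             # gradient Qx + p at x = 0
    for _ in range(max_iter):
        # Working-set selection (maximal violating pair):
        # i may still increase, j may still decrease.
        up = [k for k in range(n) if x[k] < C]
        down = [k for k in range(n) if x[k] > -C]
        i = max(up, key=lambda k: -g[k])
        j = min(down, key=lambda k: -g[k])
        if g[j] - g[i] < tol:            # KKT conditions hold (approximately)
            break
        # Closed-form solution of the 2-variable subproblem:
        # move delta from x[j] to x[i], keeping sum(x) = 0.
        eta = Q[i, i] + Q[j, j] - 2 * Q[i, j]
        delta = (g[j] - g[i]) / max(eta, 1e-12)
        # Clip so both variables stay inside the box [-C, C].
        delta = min(delta, C - x[i], x[j] + C)
        x[i] += delta
        x[j] -= delta
        # Cheap rank-two gradient update instead of recomputing Qx + p.
        g += delta * (Q[:, i] - Q[:, j])
    return x
```

Because each subproblem only touches two columns of Q, the iteration cost is O(n) rather than O(n^2), which is the practical payoff of decomposition; the finite-termination question the paper addresses is whether this loop is guaranteed to reach the tolerance at all.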