A training algorithm for optimal margin classifiers
COLT '92: Proceedings of the Fifth Annual Workshop on Computational Learning Theory
Making large-scale support vector machine learning practical
Advances in kernel methods
Fast training of support vector machines using sequential minimal optimization
Advances in kernel methods
Fast Approximation Algorithms for the Knapsack and Sum of Subset Problems
Journal of the ACM
A Simple Decomposition Method for Support Vector Machines
Machine Learning
Convergence of a Generalized SMO Algorithm for SVM Classifier Design
Machine Learning
Polynomial-Time Decomposition Algorithms for Support Vector Machines
Machine Learning
A note on the decomposition methods for support vector regression
Neural Computation
Training Support Vector Machines: an Application to Face Detection
CVPR '97: Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition
Lagrangian support vector machines
The Journal of Machine Learning Research
Improvements to Platt's SMO Algorithm for SVM Classifier Design
Neural Computation
QP Algorithms with Guaranteed Accuracy and Run Time for Support Vector Machines
The Journal of Machine Learning Research
Training support vector machines via SMO-type decomposition methods
ALT '05: Proceedings of the 16th International Conference on Algorithmic Learning Theory
General polynomial time decomposition algorithms
COLT '05: Proceedings of the 18th Annual Conference on Learning Theory
Successive overrelaxation for support vector machines
IEEE Transactions on Neural Networks
The analysis of decomposition methods for support vector machines
IEEE Transactions on Neural Networks
Improvements to the SMO algorithm for SVM regression
IEEE Transactions on Neural Networks
On the convergence of the decomposition method for support vector machines
IEEE Transactions on Neural Networks
Asymptotic convergence of an SMO algorithm without any assumptions
IEEE Transactions on Neural Networks
A formal analysis of stopping criteria of decomposition methods for support vector machines
IEEE Transactions on Neural Networks
A study on SMO-type decomposition methods for support vector machines
IEEE Transactions on Neural Networks
Candidate working set strategy based SMO algorithm in support vector machine
Information Processing and Management: an International Journal
Minimizing the error of linear separators on linearly inseparable data
Discrete Applied Mathematics
The decomposition method is currently one of the major methods for solving the convex quadratic optimization problems associated with Support Vector Machines (SVM-optimization). A key issue in this approach is the policy for working set selection. We would like to find policies that realize, as well as possible, three goals simultaneously: fast convergence to an optimal solution, efficient procedures for working set selection, and a high degree of generality (covering typical variants of SVM-optimization as special cases). In this paper, we study a general policy for working set selection that was proposed in [Nikolas List, Hans Ulrich Simon, A general convergence theorem for the decomposition method, in: Proceedings of the 17th Annual Conference on Computational Learning Theory, 2004, pp. 363-377] and further analyzed in [Nikolas List, Hans Ulrich Simon, General polynomial time decomposition algorithms, in: Proceedings of the 18th Annual Conference on Learning Theory, 2005, pp. 308-322]. This policy is known to approach feasible solutions of minimum cost efficiently for any convex quadratic optimization problem. Here, we investigate its computational complexity when it is applied to SVM-optimization. It turns out that, for working sets of variable size, the general policy poses an NP-hard working set selection problem. However, a slight variation of it (sharing the convergence properties of the original policy) can be solved in polynomial time. For working sets of fixed size 2, the situation is even better: in this case, the general policy coincides with the "rate certifying pair" approach introduced by Hush and Scovel. We show that maximum rate certifying pairs can be found in linear time, which leads to a quite efficient decomposition method with a polynomial convergence rate for SVM-optimization.
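To make the decomposition idea concrete, the following is a minimal NumPy sketch of a solver with working sets of fixed size 2 (SMO-style). It selects the classical maximal violating pair in the spirit of Keerthi et al.'s "Improvements to Platt's SMO Algorithm" (cited above); this is a simple first-order size-2 selection rule, not the linear-time maximum rate certifying pair selection analyzed in the paper, and the function name and parameters (smo_decomposition, eps, max_iter) are illustrative assumptions rather than code from the paper.

import numpy as np

def smo_decomposition(K, y, C, eps=1e-3, max_iter=100000):
    """Illustrative SMO-style decomposition for the SVM dual
           min_a  0.5 * a^T Q a - e^T a
           s.t.   y^T a = 0,  0 <= a_i <= C,
    where Q_ij = y_i * y_j * K_ij. Working sets of size 2 are chosen
    by the maximal-violating-pair rule (a sketch, not the paper's
    rate certifying pair selection)."""
    n = len(y)
    Q = (y[:, None] * y[None, :]) * K
    alpha = np.zeros(n)            # alpha = 0 is feasible
    g = -np.ones(n)                # gradient Q @ alpha - e at alpha = 0

    for _ in range(max_iter):
        # Coordinates that can still move "up" / "down" along y.
        up = ((y > 0) & (alpha < C)) | ((y < 0) & (alpha > 0))
        low = ((y < 0) & (alpha < C)) | ((y > 0) & (alpha > 0))
        v = -y * g
        iu, il = np.where(up)[0], np.where(low)[0]
        i = iu[np.argmax(v[iu])]   # most violating pair (i, j)
        j = il[np.argmin(v[il])]
        if v[i] - v[j] <= eps:     # KKT conditions hold up to eps
            break
        # Analytic step along the feasible direction d = y_i*e_i - y_j*e_j.
        a = Q[i, i] + Q[j, j] - 2.0 * y[i] * y[j] * Q[i, j]
        t = (v[i] - v[j]) / a if a > 1e-12 else np.inf
        t = min(t,
                C - alpha[i] if y[i] > 0 else alpha[i],   # box limit on a_i
                alpha[j] if y[j] > 0 else C - alpha[j])   # box limit on a_j
        alpha[i] += t * y[i]
        alpha[j] -= t * y[j]
        g += t * (y[i] * Q[:, i] - y[j] * Q[:, j])        # gradient update
    return alpha

Given a kernel matrix K, labels y in {-1, +1}, and a box bound C, the routine maintains the gradient of the dual objective and stops once the maximal KKT violation drops below eps; each iteration changes only two variables, which is what makes decomposition attractive for large-scale SVM training and makes the cost of working set selection, the subject of this paper, the dominant design question.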