Decomposition techniques are used to speed up the training of support vector machines, but for linear programming support vector machines (LP-SVMs), a direct implementation of decomposition leads to infinite loops. To solve this problem and to further speed up training, in this paper we propose an improved decomposition technique for training LP-SVMs. If an infinite loop is detected, we include in the next working set all the data in the working sets that form the loop. To accelerate training further, we improve the working set selection strategy: at each iteration step, we check the number of violations of the complementarity conditions and the constraints. If the number of violations increases, we conclude that important data have been removed from the working set, and we restore those data to the working set. Computer experiments demonstrate that training with the proposed decomposition technique and improved working set selection is drastically faster than training without decomposition, and that it is faster than training without the improved working set selection in all cases tested.
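The bookkeeping described in the abstract, detecting a repeating working set, merging all the working sets that form the cycle, and restoring data when the violation count rises, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the LP subproblem solver is abstracted as a callback (`solve_subproblem`, a hypothetical interface) that returns the indices still violating the optimality conditions after solving the LP restricted to the current working set.

```python
from typing import Callable, FrozenSet, List, Set


def decompose_train(
    n_samples: int,
    working_set_size: int,
    solve_subproblem: Callable[[Set[int]], Set[int]],
    max_iters: int = 100,
) -> Set[int]:
    """Sketch of decomposition training with infinite-loop detection.

    `solve_subproblem` is a hypothetical callback: given the current
    working set, it solves the restricted LP and returns the indices
    that still violate the complementarity conditions or constraints.
    """
    history: List[FrozenSet[int]] = []          # working sets seen so far
    working = set(range(min(working_set_size, n_samples)))
    prev_violations = n_samples                 # worst case: all violate

    for _ in range(max_iters):
        violators = solve_subproblem(working)
        if not violators:
            return working                      # optimal: no violations left

        key = frozenset(working)
        if key in history:
            # Infinite loop detected: the next working set is the union of
            # all working sets in the cycle, plus the current violators.
            start = history.index(key)
            working = set().union(*history[start:]) | violators
        else:
            history.append(key)
            if len(violators) > prev_violations:
                # Violations increased: important data were removed, so
                # restore them by keeping the current set and adding violators.
                working |= violators
            else:
                # Standard selection: take the most-violating indices
                # (here simply the first few, for illustration).
                working = set(sorted(violators)[:working_set_size])
        prev_violations = len(violators)

    return working
```

A toy run makes the cycle handling visible: if the subproblem solver always reports `{0..9} - working_set` as violators and the working set holds only four indices, plain selection alternates between two working sets forever; the loop detector merges them and training terminates.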