Fast training of support vector machines using sequential minimal optimization
Advances in Kernel Methods
Least Squares Support Vector Machine Classifiers
Neural Processing Letters
The Entire Regularization Path for the Support Vector Machine
The Journal of Machine Learning Research
Support Vector Machines for Pattern Classification (Advances in Pattern Recognition)
Incremental Support Vector Learning: Analysis, Implementation and Applications
The Journal of Machine Learning Research
ICANN '09 Proceedings of the 19th International Conference on Artificial Neural Networks: Part I
Convergence improvement of active set training for support vector regressors
ICANN '10 Proceedings of the 20th International Conference on Artificial Neural Networks: Part II
In this paper we discuss training support vector machines (SVMs) by repetitively solving a set of linear equations, an extension of the accurate incremental training proposed by Cauwenberghs and Poggio. First, we select two training data belonging to different classes and determine the optimal separating hyperplane for them. Then, we divide the training data set into several chunk data sets and initialize the active set, which holds the current and previous support vectors, with these two data. For each combined set of the active set and a chunk data set, we detect the datum that maximally violates the Karush-Kuhn-Tucker (KKT) conditions and update the optimal hyperplane by solving a set of linear equations that constrains the margins of the unbounded support vectors and the optimality of the bias term. We iterate this procedure until no violating data remain in any combined set. Computer experiments on several benchmark data sets show that the training speed of the proposed method is comparable with that of the primal-dual interior-point method combined with the decomposition technique, usually with a smaller number of support vectors.
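As an illustration of the training loop the abstract describes, the following is a minimal Python sketch of a hard-margin variant. The names rbf_kernel and train_active_set_svm, the RBF kernel choice, and the pruning of negative multipliers are assumptions made here for illustration, not the paper's exact formulation; for brevity the sketch scans all training data for KKT violators instead of combining the active set with one chunk at a time, and it omits soft margins and bounded support vectors.

```python
import numpy as np


def rbf_kernel(X1, X2, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X1 and X2
    # (an assumed kernel choice; any positive definite kernel works).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)


def train_active_set_svm(X, y, gamma=1.0, tol=1e-6, max_iter=1000):
    """Sketch of hard-margin active-set SVM training via linear equations.

    The active set S holds the (assumed unbounded) support vectors.
    Each iteration solves the bordered system

        [ 0    y_S^T ] [ b       ]   [ 0 ]
        [ y_S  Q_SS  ] [ alpha_S ] = [ 1 ],   Q_ij = y_i y_j K(x_i, x_j),

    which fixes the margins of the active data to 1 and enforces
    optimality of the bias term, then adds the worst KKT violator.
    """
    X, y = np.asarray(X, float), np.asarray(y, float)  # labels in {-1, +1}
    n = len(y)
    Q = (y[:, None] * y[None, :]) * rbf_kernel(X, X, gamma)
    # Start from one datum of each class, as the paper does.
    S = [int(np.argmax(y == 1)), int(np.argmax(y == -1))]
    alpha, b = np.zeros(n), 0.0
    for _ in range(max_iter):
        m = len(S)
        A = np.zeros((m + 1, m + 1))
        A[0, 1:] = A[1:, 0] = y[S]
        A[1:, 1:] = Q[np.ix_(S, S)]
        sol = np.linalg.solve(A, np.concatenate(([0.0], np.ones(m))))
        b, alpha_S = sol[0], sol[1:]
        # Prune active data whose multipliers turned negative (keeping at
        # least two active data) and re-solve on the reduced active set.
        neg = {i for i, a in zip(S, alpha_S) if a < 0}
        if neg and len(S) - len(neg) >= 2:
            S = [i for i in S if i not in neg]
            continue
        alpha[:] = 0.0
        alpha[S] = alpha_S
        # KKT check: every datum must satisfy y_i f(x_i) >= 1,
        # where y_i f(x_i) = (Q alpha)_i + y_i b.
        yf = Q @ alpha + y * b
        worst = int(np.argmin(yf))
        if 1.0 - yf[worst] <= tol or worst in S:
            break  # no violating data remain: the hyperplane is optimal
        S.append(worst)
    return alpha, b, S
```

On a separable data set, the returned multipliers, bias, and final active set define the decision function sign(sum_i alpha_i y_i K(x_i, x) + b), with the final active set playing the role of the support vector set.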