Fast support vector training by Newton's method
ICANN'11 Proceedings of the 21st international conference on Artificial neural networks - Volume Part II
Chapelle proposed training support vector machines (SVMs) in the primal form by Newton's method and discussed its advantages. In this paper we propose training L2 SVMs in the dual form in a manner similar to Chapelle's. Namely, we solve the quadratic programming problem for an initial working set of training data by Newton's method, delete from the working set the data with negative Lagrange multipliers as well as the data whose associated margins are larger than or equal to 1, add to the working set the training data whose associated margins are less than 1, and repeat training the SVM until the working set no longer changes. The matrix associated with the dual quadratic form is positive definite, whereas the matrix associated with the primal quadratic form is only positive semi-definite; moreover, the former requires fewer kernel evaluations. Computer experiments show that in most cases training the SVM by the proposed method is more stable and faster than training in the primal.
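The working-set loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes an RBF kernel, absorbs the bias term by adding 1 to the kernel (a common trick for bias-free duals), and starts from the full data set as the initial working set. For an L2 SVM the dual matrix is Q + I/C with Q_ij = y_i y_j (K(x_i, x_j) + 1), which is positive definite, so the Newton step on the working set reduces to one linear solve.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Gaussian RBF kernel matrix between the rows of X1 and X2
    d = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
         - 2.0 * X1 @ X2.T)
    return np.exp(-gamma * d)

def train_l2svm_dual(X, y, C=10.0, gamma=1.0, max_iter=50):
    """Working-set Newton training of an L2 SVM in the dual (a sketch)."""
    n = len(y)
    # Bias absorbed by adding 1 to the kernel (an assumption of this sketch)
    K = rbf_kernel(X, X, gamma) + 1.0
    Q = (y[:, None] * y[None, :]) * K       # dual quadratic form without I/C
    H = Q + np.eye(n) / C                   # positive definite dual matrix
    alpha = np.zeros(n)
    work = np.arange(n)                     # initial working set: all data
    for _ in range(max_iter):
        # Newton step = exact solve of the QP restricted to the working set
        a_w = np.linalg.solve(H[np.ix_(work, work)], np.ones(len(work)))
        alpha = np.zeros(n)
        alpha[work] = a_w
        margins = Q @ alpha                 # y_i * f(x_i) for every sample
        # delete data with negative multipliers or margins >= 1 ...
        keep = work[(alpha[work] > 0) & (margins[work] < 1.0)]
        # ... and add outside data with margins < 1
        outside = np.setdiff1d(np.arange(n), work)
        add = outside[margins[outside] < 1.0]
        new_work = np.union1d(keep, add)
        if np.array_equal(new_work, work):  # working set unchanged: done
            break
        work = new_work
    return alpha

def predict(X_train, y, alpha, X_test, gamma=1.0):
    # Decision values f(x) = sum_j alpha_j y_j (K(x_j, x) + 1)
    K = rbf_kernel(X_train, X_test, gamma) + 1.0
    return np.sign((alpha * y) @ K)
```

On a small separable data set the loop typically stabilizes after one or two working-set updates; at termination all multipliers in the working set are positive and the corresponding margins equal 1 - alpha_i/C, i.e., less than 1, matching the selection rule above.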