In our previous work, we discussed training a support vector regressor (SVR) with an active set method based on Newton's method. In this paper, we discuss improving the convergence of that method by two modifications. First, to stabilize convergence for a large epsilon tube, we calculate the bias term according to the signs of the previous variables rather than the updated variables. Second, to speed up computing the matrix inverse by Cholesky factorization during the iterations, we keep the factorized matrix from the first iteration step and, at subsequent steps, restart the Cholesky factorization at the position where a variable in the working set is replaced. Computer experiments show that the proposed method stabilizes convergence for a large epsilon tube and that the incremental Cholesky factorization speeds up training.
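The incremental Cholesky factorization can be made concrete with a short sketch. The Python snippet below is an illustrative reconstruction, not the authors' implementation: the function name restart_cholesky, the use of NumPy, and the dense-matrix setting are all assumptions. It recomputes the lower-triangular factor of a positive definite kernel matrix after row/column k has been replaced, reusing every entry of the previous factor that the replacement leaves unchanged (all columns before k, except row k itself).

```python
import numpy as np

def restart_cholesky(K, L_prev, k):
    """Sketch: refresh the Cholesky factor L (K = L @ L.T) after
    row/column k of K was replaced, restarting at position k and
    reusing the unaffected part of the previous factor L_prev."""
    n = K.shape[0]
    L = L_prev.copy()                 # columns < k (rows != k) are still valid
    # Row k depends on the new row k of K: forward substitution.
    for j in range(k):
        L[k, j] = (K[k, j] - L[k, :j] @ L[j, :j]) / L[j, j]
    # Columns k..n-1 must be recomputed.
    for j in range(k, n):
        L[j, j] = np.sqrt(K[j, j] - L[j, :j] @ L[j, :j])
        for i in range(j + 1, n):
            L[i, j] = (K[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

if __name__ == "__main__":
    # Toy check: replace one working-set variable and restart the factorization.
    rng = np.random.default_rng(0)
    n, k = 6, 3
    A = rng.standard_normal((n, n))
    K = A @ A.T + n * np.eye(n)       # positive definite "kernel" matrix
    L = np.linalg.cholesky(K)
    A[k] = rng.standard_normal(n)     # swap the k-th working-set variable
    K_new = A @ A.T + n * np.eye(n)   # only row/column k of K changes
    L_new = restart_cholesky(K_new, L, k)
    assert np.allclose(L_new, np.linalg.cholesky(K_new))
```

Restarting at column k instead of refactorizing from scratch skips the work for the first k columns of the factor, which is where the speedup over a full refactorization at each iteration comes from.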