Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond
On-Line Support Vector Machine Regression
ECML '02 Proceedings of the 13th European Conference on Machine Learning
The Kernel-Adatron Algorithm: A Fast and Simple Learning Procedure for Support Vector Machines
ICML '98 Proceedings of the Fifteenth International Conference on Machine Learning
The Entire Regularization Path for the Support Vector Machine
The Journal of Machine Learning Research
A tutorial on ν-support vector machines
Applied Stochastic Models in Business and Industry - Statistical Learning
Training ν-Support Vector Classifiers: Theory and Algorithms
Neural Computation
Online Passive-Aggressive Algorithms
The Journal of Machine Learning Research
Incremental Support Vector Learning: Analysis, Implementation and Applications
The Journal of Machine Learning Research
A kernel path algorithm for support vector machines
Proceedings of the 24th international conference on Machine learning
Successive overrelaxation for support vector machines
IEEE Transactions on Neural Networks
The ν-Support Vector Machine (ν-SVM) for classification proposed by Scholkopf et al. has the advantage of a parameter ν that controls the number of support vectors and margin errors. However, compared with the standard C-Support Vector Machine (C-SVM), its formulation is more complicated, and until now there have been no effective methods for accurate on-line learning with it. In this paper, we propose a new, effective, accurate on-line algorithm based on a modified formulation of the original ν-SVM. The algorithm includes two special steps: the first is a relaxed adiabatic incremental adjustment; the second is a strict restoration adjustment. Experiments on several benchmark datasets demonstrate that, using these two steps, the accurate on-line algorithm avoids infeasible updating paths as far as possible and successfully converges to the optimal solution. It converges quickly, especially with the Gaussian kernel, and is faster than the batch algorithm.
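The paper's own two-step adjustment procedure is not reproduced here, but the general flavor of kernel-based incremental training can be illustrated with the Kernel-Adatron of Friess et al. (cited above): a simple procedure that updates one dual coefficient at a time with a clipped gradient step, using a Gaussian kernel. The sketch below is illustrative only; the function names, step size, and epoch count are assumptions, and it implements the Kernel-Adatron, not the ν-SVM algorithm proposed in this paper.

```python
import math

def rbf(x, z, gamma=1.0):
    # Gaussian (RBF) kernel between two points given as tuples/lists.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def kernel_adatron(X, y, gamma=1.0, eta=0.1, epochs=500):
    # Kernel-Adatron training (illustrative sketch, no bias term):
    # repeatedly nudge each dual coefficient toward unit margin,
    # clipping at zero to keep the dual variables feasible.
    n = len(X)
    K = [[rbf(X[i], X[j], gamma) for j in range(n)] for i in range(n)]
    alpha = [0.0] * n
    for _ in range(epochs):
        for i in range(n):
            # Functional margin of example i under the current model.
            margin = y[i] * sum(alpha[j] * y[j] * K[i][j] for j in range(n))
            # Clipped additive update: increase alpha while margin < 1.
            alpha[i] = max(0.0, alpha[i] + eta * (1.0 - margin))
    return alpha

def predict(X, y, alpha, x, gamma=1.0):
    # Sign of the kernel expansion over the training set.
    s = sum(alpha[j] * y[j] * rbf(X[j], x, gamma) for j in range(len(X)))
    return 1 if s >= 0 else -1
```

For example, on the XOR pattern (which is not linearly separable in input space) the Gaussian kernel makes the problem separable, and the procedure classifies all four training points correctly after a few hundred epochs.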