Recently it has been shown that appropriate perceptron training methods, such as the Schlesinger-Kozinec (SK) algorithm, can provide maximal margin hyperplanes at a training cost of O(N × T), with N denoting the sample size and T the number of training iterations. In this work we shall relate SK training to the classical Rosenblatt rule and show that, when the hyperplane vector is written in dual form, the support vector (SV) coefficients determine how frequently each SV appears during training; in particular, large-coefficient SVs dominate training costs. In this light we shall explore a training acceleration procedure in which large-coefficient and, hence, high-cost SVs are removed from training, which in turn allows a further, stable large-sample shrinking. As we shall see, this results in much faster training without penalizing test classification.
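The dual-form view can be made concrete with a small sketch. The Python snippet below is a minimal illustration, not the authors' exact SK procedure: it runs the classical Rosenblatt rule in dual form, so each dual coefficient alpha_i counts how often sample i has triggered an update, and it freezes samples whose coefficient reaches a threshold, mimicking the removal of large-coefficient, high-cost SVs from training. The homogeneous (no-bias) setting, the linear kernel, and the threshold parameter tau are assumptions made for illustration.

```python
import numpy as np

def dual_perceptron_with_shrinking(X, y, epochs=100, tau=10):
    """Dual-form Rosenblatt training with a coefficient-based shrinking
    heuristic: once a sample's dual coefficient alpha_i reaches tau it is
    frozen out of further updates, since large-coefficient SVs are the
    ones that dominate training cost. (Illustrative sketch only.)"""
    n = X.shape[0]
    alpha = np.zeros(n)              # dual coefficients; w = sum_i alpha_i y_i x_i
    active = np.ones(n, dtype=bool)  # samples still visited during training
    K = X @ X.T                      # linear-kernel Gram matrix
    for _ in range(epochs):
        mistakes = 0
        for i in np.flatnonzero(active):
            # sign of the current hyperplane on sample i, in dual form
            if y[i] * ((alpha * y) @ K[:, i]) <= 0:
                alpha[i] += 1        # classical Rosenblatt update
                mistakes += 1
                if alpha[i] >= tau:
                    active[i] = False  # shrink: stop visiting this costly SV
        if mistakes == 0:            # converged on the active set
            break
    w = (alpha * y) @ X              # primal weight vector, for inspection
    return w, alpha

# Example usage on a toy linearly separable problem (assumed data):
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.r_[np.ones(50), -np.ones(50)]
w, alpha = dual_perceptron_with_shrinking(X, y)
```

Note that frozen samples keep contributing their accumulated coefficients to the hyperplane; they are only removed from the training sweep, which is what saves the per-iteration cost.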