The nature of statistical learning theory
An introduction to Support Vector Machines and other kernel-based learning methods
AI Game Programming Wisdom
Advances in Large Margin Classifiers
An asymptotic statistical theory of polynomial kernel methods
Neural Computation
Geometrical Properties of Nu Support Vector Machines with Different Norms
Neural Computation
Effects of kernel function on Nu support vector machines in extreme cases
IEEE Transactions on Neural Networks
An Incremental Feature Learning Algorithm Based on Least Square Support Vector Machine
FAW '08 Proceedings of the 2nd annual international workshop on Frontiers in Algorithmics
A Max-Margin Learning Algorithm with Additional Features
FAW '09 Proceedings of the 3rd International Workshop on Frontiers in Algorithmics
A Large Margin Classifier with Additional Features
MLDM '09 Proceedings of the 6th International Conference on Machine Learning and Data Mining in Pattern Recognition
Support vector machines (SVMs) are known to lead to a quadratic programming problem, which incurs a high computational cost. To reduce this cost, this paper considers, from a geometrical point of view, two incremental (iterative) SVMs with homogeneous hyperplanes. The first method is shown to produce the same solution as a batch-mode SVM with linear complexity on average, exploiting the fact that only the effective examples are necessary and sufficient for the solution. The second, which stores the set of support vectors instead of the effective examples, is quantitatively shown to perform worse, although it is considerably easier to implement.
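The second scheme described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes a linear, homogeneous (no-bias) hyperplane, uses a Pegasos-style subgradient trainer in place of the quadratic-programming solver, and the names `train_svm` and `incremental_svm` are invented for this sketch.

```python
def train_svm(data, dim, lam=0.01, epochs=200):
    # Pegasos-style subgradient descent on the regularized hinge loss,
    # with a homogeneous hyperplane (no bias term). A stand-in for the
    # QP solver an actual SVM would use.
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            w = [(1.0 - eta * lam) * wi for wi in w]  # shrink step
            if margin < 1.0:                          # hinge-loss subgradient
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def incremental_svm(stream, dim):
    # Sketch of the "store the support vectors" scheme: keep only points
    # at or inside the margin, and retrain whenever a new example
    # violates the current margin.
    stored = []
    w = [0.0] * dim
    for x, y in stream:
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        if margin < 1.0:                 # margin violation: retrain
            stored.append((x, y))
            w = train_svm(stored, dim)
            # discard points now strictly outside the margin; the rest
            # approximate the support-vector set for the next round
            stored = [(xs, ys) for xs, ys in stored
                      if ys * sum(wi * xi for wi, xi in zip(w, xs)) <= 1.0 + 1e-6]
    return w
```

Because only the (approximate) support vectors are retained, memory stays small, but the retained set need not coincide with the effective examples of the first scheme, which is one way to read the abstract's claim that this variant trades accuracy for ease of implementation.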