To reduce computational cost, the discriminant function of a support vector machine (SVM) should be represented with as few vectors as possible. This problem has been tackled in several ways. In this article, we develop an explicit solution for the general quadratic kernel $k(x, x') = (C + D\, x^{\top} x')^2$. For a given number of vectors, this solution provides the best possible approximation, and it even recovers the discriminant function exactly once the number of vectors used is large enough. The key idea is to express the inhomogeneous kernel as a homogeneous kernel on a space with one more dimension than the original one, and then to follow the approach of Burges (1996).
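As a rough illustration of this idea (a sketch under stated assumptions, not the article's own derivation), the snippet below embeds each point as $\tilde{x} = (x, \sqrt{C/D})$, so that $(C + D\, x^{\top} x')^2 = D^2 (\tilde{x}^{\top} \tilde{x}')^2$, and then truncates the eigendecomposition of the symmetric matrix $S = \sum_i \alpha_i \tilde{x}_i \tilde{x}_i^{\top}$, in the spirit of Burges (1996) for homogeneous quadratic kernels. It assumes $C, D > 0$, folds the labels into the coefficients $\alpha_i$, ignores the bias term, and evaluates the reduced expansion directly in the embedded space; mapping the eigenvectors back to genuine reduced-set vectors for the inhomogeneous kernel requires an extra rescaling that the article treats explicitly and this sketch omits.

```python
import numpy as np

def embed(X, C, D):
    """Append sqrt(C/D) as an extra coordinate, so that
    (C + D x.y)^2 = D**2 * (embed(x) . embed(y))**2 (assumes C, D > 0)."""
    extra = np.full((X.shape[0], 1), np.sqrt(C / D))
    return np.hstack([X, extra])

def reduced_set(X_sv, alpha, C, D, m):
    """Keep the m dominant eigenpairs of S = sum_i alpha_i xt_i xt_i^T.

    In the embedded space the discriminant is the quadratic form
    f(x) = D**2 * xt^T S xt, so truncating the eigendecomposition of
    the symmetric matrix S gives the best m-vector approximation.
    """
    Xt = embed(X_sv, C, D)
    S = (Xt * alpha[:, None]).T @ Xt    # sum_i alpha_i xt_i xt_i^T
    w, V = np.linalg.eigh(S)            # eigendecomposition of symmetric S
    keep = np.argsort(-np.abs(w))[:m]   # m largest |eigenvalues|
    return w[keep], V[:, keep]          # coefficients and unit vectors

def evaluate(x, lam, Z, C, D):
    """Reduced expansion: f(x) ~ D**2 * sum_j lam_j (z_j . embed(x))**2."""
    xt = embed(x[None, :], C, D)[0]
    return D**2 * np.sum(lam * (Z.T @ xt) ** 2)

# Sanity check: with m >= rank(S) (at most d+1) the expansion is exact,
# matching the recovery property stated in the abstract.
rng = np.random.default_rng(0)
X_sv, alpha = rng.normal(size=(50, 3)), rng.normal(size=50)
lam, Z = reduced_set(X_sv, alpha, C=1.0, D=2.0, m=4)
x = rng.normal(size=3)
direct = np.sum(alpha * (1.0 + 2.0 * X_sv @ x) ** 2)
print(np.isclose(direct, evaluate(x, lam, Z, C=1.0, D=2.0)))  # True
```

Because $S$ has size $(d+1) \times (d+1)$ in the embedded space, at most $d+1$ vectors suffice for exact recovery, and keeping fewer eigenpairs trades accuracy for evaluation cost.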