[1] Sparseness of support vector machines. The Journal of Machine Learning Research.
[2] A Direct Method for Building Sparse Kernel Learning Algorithms. The Journal of Machine Learning Research.
[3] Sparse kernel SVMs via cutting-plane training. Machine Learning.
[4] Towards minimizing the annotation cost of certified text classification. Proceedings of the 22nd ACM International Conference on Information & Knowledge Management.
While Support Vector Machines (SVMs) with kernels offer great flexibility and prediction performance on many application problems, their practical use is often hindered by the following two problems. Both can be traced back to the number of Support Vectors (SVs), which is known to generally grow linearly with the size of the data set [1]. First, training is slower than for other methods and for linear SVMs, where recent advances in training algorithms have vastly improved training time. Second, since the prediction rule takes the form $h(x)={\rm sign} \left[\sum^{\#SV}_{i=1} \alpha_i K(x_i, x)\right]$, it is too expensive to evaluate in many applications when the number of SVs is large.
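To make the prediction cost concrete, the decision rule above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an RBF kernel and uses randomly generated support vectors and coefficients, and all function and variable names are hypothetical. The point is that evaluating $h(x)$ requires one kernel evaluation per SV, so prediction time grows linearly with the number of SVs.

```python
import numpy as np

def rbf_kernel(x_i, x, gamma=0.5):
    """RBF kernel K(x_i, x) = exp(-gamma * ||x_i - x||^2) (illustrative choice)."""
    diff = x_i - x
    return np.exp(-gamma * np.dot(diff, diff))

def predict(x, support_vectors, alphas, gamma=0.5):
    """Evaluate h(x) = sign( sum_i alpha_i * K(x_i, x) ).

    The sum runs over every support vector, so the cost of a single
    prediction is proportional to the number of SVs.
    """
    score = sum(a * rbf_kernel(sv, x, gamma)
                for a, sv in zip(alphas, support_vectors))
    return np.sign(score)

# Hypothetical example: 1000 SVs in 20 dimensions means 1000 kernel
# evaluations per prediction; doubling the SV count doubles the work.
rng = np.random.default_rng(0)
support_vectors = rng.normal(size=(1000, 20))
alphas = rng.normal(size=1000)   # signed coefficients alpha_i
x_new = rng.normal(size=20)
print(predict(x_new, support_vectors, alphas))
```

Sparse kernel methods such as those in the references above aim to reduce this cost by representing the decision rule with far fewer (possibly synthetic) basis vectors while preserving accuracy.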