Sparse Kernel SVMs via Cutting-Plane Training

  • Authors:
  • Thorsten Joachims; Chun-Nam John Yu

  • Affiliations:
  • Dept. of Computer Science, Cornell University, Ithaca, NY 14853, USA (both authors)

  • Venue:
  • ECML PKDD '09 Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: Part I
  • Year:
  • 2009

Abstract

While Support Vector Machines (SVMs) with kernels offer great flexibility and prediction performance on many application problems, their practical use is often hindered by the following two problems. Both problems can be traced back to the number of Support Vectors (SVs), which is known to generally grow linearly with the data set size [1]. First, training is slower than for other methods and for linear SVMs, where recent advances in training algorithms have vastly improved training time. Second, since the prediction rule takes the form $h(x)={\rm sign} \left[\sum^{\#SV}_{i=1} \alpha_i K(x_i, x)\right]$, it is too expensive to evaluate in many applications when the number of SVs is large.
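To make the prediction-cost issue concrete, here is a minimal sketch of evaluating the prediction rule above for a kernel SVM. It assumes an RBF kernel and illustrative arrays of support vectors and coefficients (all names and values are hypothetical, not from the paper); the point is that each prediction requires one kernel evaluation per support vector, so cost grows with the number of SVs.

```python
import numpy as np

def rbf_kernel(x_i, x, gamma=0.1):
    # K(x_i, x) = exp(-gamma * ||x_i - x||^2)
    return np.exp(-gamma * np.sum((x_i - x) ** 2))

def predict(x, support_vectors, alphas, gamma=0.1):
    # h(x) = sign( sum_{i=1}^{#SV} alpha_i * K(x_i, x) )
    # Cost is O(#SV * d): one kernel evaluation per support vector,
    # which is why prediction becomes expensive when #SV is large.
    score = sum(a * rbf_kernel(sv, x, gamma)
                for a, sv in zip(alphas, support_vectors))
    return np.sign(score)

# Illustrative usage with random data (hypothetical values).
rng = np.random.default_rng(0)
support_vectors = rng.normal(size=(1000, 20))  # #SV tends to grow linearly with data set size
alphas = rng.normal(size=1000)
x_new = rng.normal(size=20)
print(predict(x_new, support_vectors, alphas))
```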