Fast rates for support vector machines

  • Authors:
  • Ingo Steinwart; Clint Scovel

  • Affiliations:
  • CCS-3, Los Alamos National Laboratory, Los Alamos, NM (both authors)

  • Venue:
  • COLT'05: Proceedings of the 18th Annual Conference on Learning Theory
  • Year:
  • 2005

Abstract

We establish learning rates to the Bayes risk for support vector machines (SVMs) using a regularization sequence $\lambda_n = n^{-\alpha}$, where $\alpha \in (0,1)$ is arbitrary. Under a noise condition recently proposed by Tsybakov, these rates can become faster than $n^{-1/2}$. In order to deal with the approximation error, we present a general concept called the approximation error function, which describes how well the infinite-sample versions of the considered SVMs approximate the data-generating distribution. In addition, we discuss in some detail the relation between the "classical" approximation error and the approximation error function. Finally, for distributions satisfying a geometric noise assumption, we establish learning rates when the RKHS used is a Sobolev space.
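The sketch below illustrates, under stated assumptions, what training an SVM with the sample-size-dependent regularization sequence $\lambda_n = n^{-\alpha}$ from the abstract could look like in practice. It is not the authors' code: the function name, the synthetic data, and the mapping from $\lambda_n$ to scikit-learn's C parameter (taken here as C = 1/(2 n \lambda_n), a common translation of the regularized-risk objective) are all assumptions made for illustration only.

```python
# Hypothetical illustration of the regularization sequence lambda_n = n^{-alpha}
# with alpha in (0, 1), as described in the abstract.
import numpy as np
from sklearn.svm import SVC

def svm_with_rate_schedule(X, y, alpha=0.5):
    """Fit a Gaussian-kernel SVM whose regularization shrinks as lambda_n = n^(-alpha)."""
    n = X.shape[0]
    lambda_n = n ** (-alpha)          # regularization sequence from the abstract
    C = 1.0 / (2.0 * n * lambda_n)    # assumed correspondence to scikit-learn's C
    clf = SVC(kernel="rbf", C=C)
    return clf.fit(X, y)

# Usage on synthetic data (purely illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = svm_with_rate_schedule(X, y, alpha=0.5)
```

As the sample size n grows, lambda_n decreases, so the effective penalty weakens at the rate chosen through alpha; the paper's analysis concerns how this choice interacts with the noise condition to yield fast learning rates.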