Leave-one-out support vector machines

  • Authors: Jason Weston
  • Affiliation: Department of Computer Science, Royal Holloway, University of London, Egham, Surrey, UK
  • Venue: IJCAI'99, Proceedings of the 16th International Joint Conference on Artificial Intelligence - Volume 2
  • Year: 1999

Abstract

We present a new learning algorithm for pattern recognition inspired by a recent upper bound on leave-one-out error [Jaakkola and Haussler, 1999] proved for Support Vector Machines (SVMs) [Vapnik, 1995; 1998]. The new approach directly minimizes the expression given by the bound in an attempt to minimize leave-one-out error. This yields a convex optimization problem that constructs a sparse linear classifier in feature space using the kernel technique. As such, the algorithm possesses many of the same properties as SVMs. The main novelty of the algorithm is that, apart from the choice of kernel, it is parameterless: the selection of the number of training errors is inherent in the algorithm rather than chosen via an extra free parameter, as it is in SVMs. First experiments with the method on benchmark datasets from the UCI repository show results comparable to those of SVMs tuned to their best choice of parameter.
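
The abstract does not restate the bound itself. As a rough sketch, the following gives one common form of the Jaakkola-Haussler leave-one-out bound, together with a convex surrogate consistent with the abstract's description; the notation and the exact form of the surrogate are assumptions for illustration, not quotations from the paper.

% Sketch; notation assumed, not taken from the abstract.
% For a kernel expansion f(x) = \sum_j \alpha_j y_j K(x, x_j) with \alpha_j >= 0,
% removing point i lowers its functional margin by at most \alpha_i K(x_i, x_i), so:
\[
  \mathrm{err}_{\mathrm{loo}}
  \;\le\;
  \frac{1}{\ell} \sum_{i=1}^{\ell}
  \theta\bigl( \alpha_i K(x_i, x_i) - y_i f(x_i) \bigr),
  \qquad
  f(x) = \sum_{j=1}^{\ell} \alpha_j \, y_j \, K(x, x_j),
\]
% where \theta is the step function. Replacing \theta with a hinge-type loss
% gives a convex problem in the nonnegative coefficients \alpha, of roughly the form
\[
  \min_{\alpha \ge 0} \;
  \sum_{i=1}^{\ell}
  \Bigl( 1 - y_i \sum_{j \ne i} \alpha_j \, y_j \, K(x_i, x_j) \Bigr)_{\!+} ,
\]
% which, apart from the kernel, has no free parameter to tune.

Under this reading, sparsity in the coefficients plays the role that support vectors play in a standard SVM, and the absence of a soft-margin trade-off parameter is what the abstract means by "parameterless".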