A PAC-Bayesian margin bound for linear classifiers

  • Authors:
  • R. Herbrich; T. Graepel

  • Affiliations:
  • Dept. of Stat. & Bus. Math., Technische Univ. Berlin

  • Venue:
  • IEEE Transactions on Information Theory
  • Year:
  • 2002

Abstract

We present a bound on the generalization error of linear classifiers in terms of a refined margin quantity on the training sample. The result is obtained in a probably approximately correct (PAC)-Bayesian framework and is based on geometrical arguments in the space of linear classifiers. The new bound constitutes an exponential improvement over the previously tightest margin bound, which was developed in the luckiness framework, and scales logarithmically in the inverse margin. Even when there are fewer training examples than input dimensions, sufficiently large margins lead to nontrivial bound values and, for maximum margins, to a vanishing complexity term. In contrast to previous results, however, the new bound does depend on the dimensionality of the feature space. The analysis shows that the classical margin is too coarse a measure of the essential quantity that controls the generalization error: the fraction of hypothesis space consistent with the training sample. The practical relevance of the result lies in the fact that the well-known support vector machine (SVM) is optimal with respect to the new bound only if the feature vectors in the training sample are all of the same length. As a consequence, we recommend using SVMs on normalized feature vectors only. Numerical simulations support this recommendation and demonstrate that the new error bound can be used for model selection.
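
The abstract's practical recommendation translates into a short sketch. The Python example below is a minimal illustration assuming scikit-learn; the synthetic dataset, the regularization constant C, and all variable names are illustrative choices, not taken from the paper. It normalizes the training vectors to unit Euclidean length before fitting a linear SVM, computes the normalized margin on the training sample, and evaluates a complexity term of the schematic shape the abstract describes: proportional to the dimensionality and logarithmic in the inverse margin. The exact statement of the bound and its constants are given in the paper and are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import normalize
from sklearn.svm import SVC

# Synthetic data in the regime the abstract mentions:
# fewer training examples than input dimensions.
X, y = make_classification(n_samples=50, n_features=200,
                           n_informative=20, random_state=0)

# Recommendation from the abstract: train the SVM on feature
# vectors normalized to unit Euclidean length.
X_unit = normalize(X, norm="l2")

clf = SVC(kernel="linear", C=1e3)  # large C approximates a hard-margin SVM
clf.fit(X_unit, y)

# Normalized (geometric) margin on the training sample:
# gamma = min_i y_i (<w, x_i> + b) / ||w||, with ||x_i|| = 1.
w = clf.coef_.ravel()
b = clf.intercept_[0]
signed = (2 * y - 1) * (X_unit @ w + b) / np.linalg.norm(w)
gamma = signed.min()
print(f"normalized margin on the training sample: {gamma:.4f}")

# Schematic complexity term of the shape described in the abstract
# (proportional to the dimensionality d, logarithmic in the inverse
# margin, vanishing as gamma -> 1); not the paper's exact bound.
d = X_unit.shape[1]
if 0.0 < gamma < 1.0:
    complexity = d * np.log(1.0 / (1.0 - np.sqrt(1.0 - gamma ** 2)))
    print(f"schematic complexity term: {complexity:.2f}")
```

Because the bound is stated for feature vectors of equal length, comparing such a margin-based complexity quantity across candidate feature maps, each applied to normalized inputs, is one way it could be used for model selection, in the spirit of the abstract's closing remark.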