Learning curves for Gaussian process regression: approximations and bounds

  • Authors:
  • Peter Sollich; Anason Halees

  • Affiliations:
  • Department of Mathematics, King's College London, London WC2R 2LS, U.K. (both authors)

  • Venue:
  • Neural Computation
  • Year:
  • 2002

Abstract

We consider the problem of calculating learning curves (i.e., average generalization performance) of Gaussian processes used for regression. Starting from a simple expression for the generalization error in terms of the eigenvalue decomposition of the covariance function, we derive a number of approximation schemes. We identify where these become exact and compare them with existing bounds on learning curves; the new approximations, which can be used for any input space dimension, generally come substantially closer to the truth. We also study possible improvements to our approximations. Finally, we use a simple, exactly solvable learning scenario to show that there are limits of principle on the quality of approximations and bounds expressible solely in terms of the eigenvalue spectrum of the covariance function.
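
For readers who want to experiment with eigenvalue-based learning-curve estimates of the kind the abstract describes, the sketch below evaluates an Opper–Vivarelli-style lower bound and a simple self-consistent approximation from a truncated eigenvalue spectrum. The geometric spectrum, the noise variance, and the particular form of the self-consistent equation are illustrative assumptions made here, not the paper's own schemes, which are derived in full in the article.

    import numpy as np

    def ov_lower_bound(lam, n, noise_var):
        """Eigenvalue-based lower bound of Opper-Vivarelli type:
        sum_i noise_var * lam_i / (noise_var + n * lam_i)."""
        lam = np.asarray(lam, dtype=float)
        return float(np.sum(noise_var * lam / (noise_var + n * lam)))

    def self_consistent_approx(lam, n, noise_var, tol=1e-12, max_iter=10_000):
        """Fixed-point iteration for a self-consistent approximation of the form
            eps = sum_i [ 1/lam_i + n / (noise_var + eps) ]^{-1},
        used here only to illustrate the kind of eigenvalue-based scheme the
        paper analyses; the paper's approximations differ in detail."""
        lam = np.asarray(lam, dtype=float)
        eps = float(np.sum(lam))  # start from the n = 0 error (total prior variance)
        for _ in range(max_iter):
            new_eps = float(np.sum(1.0 / (1.0 / lam + n / (noise_var + eps))))
            if abs(new_eps - eps) < tol:
                break
            eps = new_eps
        return eps

    # Hypothetical spectrum: geometrically decaying eigenvalues of a smooth
    # covariance function, truncated at 50 terms (an assumption for illustration).
    lam = 0.5 ** np.arange(50)
    for n in (0, 10, 100, 1000):
        print(n, ov_lower_bound(lam, n, 0.05), self_consistent_approx(lam, n, 0.05))

The fixed-point iteration converges here because the right-hand side is monotone in eps and bounded above by the sum of the eigenvalues, so starting from the n = 0 error gives a decreasing sequence with a limit; both quantities depend on the covariance function only through its eigenvalue spectrum, which is exactly the setting in which the paper establishes its in-principle limits.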