On Relative Loss Bounds in Generalized Linear Regression

  • Authors:
  • Jürgen Forster

  • Venue:
  • FCT '99 Proceedings of the 12th International Symposium on Fundamentals of Computation Theory
  • Year:
  • 1999

Abstract

In relative loss bounds, the performance of an on-line learning algorithm is compared to that of a class of off-line algorithms, called experts. In this paper we reconsider a result by Vovk, namely an upper bound on the on-line relative loss for linear regression with square loss; here the experts are linear functions. We give a shorter and simpler proof of Vovk's result and a new motivation for the choice of predictions made by Vovk's learning algorithm. This is done by calculating the prediction for the last trial of a sequence of trials that is, in a certain sense, best when the outcome variable is known to be bounded. We try to generalize these ideas to the case of generalized linear regression, where the experts are neurons, and give a formula for the "best" prediction for the last trial in this case as well. This prediction turns out to be essentially an integral over the "best" expert applied to the last instance. Predictions that are "optimal" in this sense might be good predictions for long sequences of trials as well.
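For reference, the abstract's setting can be illustrated with a minimal sketch of the on-line forecaster usually attributed to Vovk (often called the Vovk-Azoury-Warmuth forecaster): at trial t it predicts with a ridge-regression-style solution in which the current instance is folded into the Gram matrix before the outcome is seen. This is a sketch under assumptions, not the paper's exact formulation; the ridge parameter `a` and the outcome bound `Y` are illustrative names, since the abstract only states that the outcome variable is bounded.

```python
import numpy as np


class VovkStyleForecaster:
    """Sketch of a Vovk-style on-line linear regression forecaster.

    Assumptions (not fixed by the abstract): ridge parameter `a` > 0
    and a known bound `Y` on the outcome variable.
    """

    def __init__(self, dim, a=1.0, Y=1.0):
        self.A = a * np.eye(dim)   # a*I + sum of past outer products x_s x_s^T
        self.b = np.zeros(dim)     # sum of past y_s * x_s
        self.Y = Y

    def predict(self, x):
        # Characteristic feature of this forecaster: the current
        # instance x_t enters the Gram matrix *before* predicting,
        # while b contains only past outcomes. The prediction is
        # clipped to [-Y, Y] using the assumed outcome bound.
        A_t = self.A + np.outer(x, x)
        y_hat = x @ np.linalg.solve(A_t, self.b)
        return float(np.clip(y_hat, -self.Y, self.Y))

    def update(self, x, y):
        # After the outcome y_t is revealed, absorb the trial.
        self.A += np.outer(x, x)
        self.b += y * x


# Toy run: bounded outcomes generated by a fixed linear expert.
rng = np.random.default_rng(0)
w = np.array([0.5, -0.3])
learner = VovkStyleForecaster(dim=2)
loss = 0.0
for t in range(100):
    x = rng.standard_normal(2)
    y = float(np.clip(w @ x, -1.0, 1.0))
    loss += (y - learner.predict(x)) ** 2
    learner.update(x, y)
```

The inclusion of the current instance in the Gram matrix before predicting is precisely the design choice that the paper motivates by computing the "best" prediction for the last trial of a sequence.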