On the Stochastic Complexity of Learning Realizable and Unrealizable Rules

  • Authors:
  • Ronny Meir; Neri Merhav

  • Affiliations:
  • Department of Electrical Engineering, Technion, Haifa 32000, Israel (rmeir@ee.technion.ac.il)

  • Venue:
  • Machine Learning
  • Year:
  • 1995

Abstract

The problem of learning from examples in an average-case setting is considered. Focusing on the stochastic complexity, an information-theoretic quantity measuring the minimal description length of the data given a class of models, we establish rigorous upper and lower bounds on this quantity under various conditions. For realizable problems, where the model class is rich enough to represent the function giving rise to the examples, we find tight upper and lower bounds on the stochastic complexity. In this case, bounds on the prediction error follow immediately via the methods of Haussler et al. (1994a). For unrealizable learning we find a tight upper bound only when learning within a space of finite VC dimension. Moreover, we show in the latter case that the optimal method for prediction may differ from that for data compression, even in the limit of an infinite amount of training data, although the two problems (i.e., prediction and compression) are asymptotically equivalent in the realizable case. This result may have consequences for many widely used model selection methods.
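To make the central quantity concrete, the following is a minimal sketch, not taken from the paper, of the mixture-code form of stochastic complexity for a finite class of binary classifiers under a uniform prior and a simple label-noise model. The threshold class, the noise parameter, and all function names are illustrative assumptions rather than the paper's setup.

```python
import math

def stochastic_complexity(xs, ys, hypotheses, noise=0.1):
    """Mixture-code stochastic complexity (in bits) of the labels `ys`
    given inputs `xs`, relative to a finite class of binary classifiers
    under a uniform prior. Each hypothesis h maps x -> {0, 1}; labels
    are assumed flipped with probability `noise` (an assumed noise model).
    """
    prior = 1.0 / len(hypotheses)
    mixture = 0.0
    for h in hypotheses:
        # P(y^n | h, x^n): product of per-example label probabilities.
        p = 1.0
        for x, y in zip(xs, ys):
            p *= (1 - noise) if h(x) == y else noise
        mixture += prior * p
    # Description length of the data given the class: -log2 of the mixture.
    return -math.log2(mixture)

# Illustrative class: threshold classifiers on {0, ..., 9} (VC dimension 1).
hypotheses = [lambda x, t=t: int(x >= t) for t in range(11)]
xs = list(range(10))
ys_real = [int(x >= 4) for x in xs]   # realizable: matched by t = 4
ys_unreal = [x % 2 for x in xs]       # unrealizable: no threshold fits
print(stochastic_complexity(xs, ys_real, hypotheses))
print(stochastic_complexity(xs, ys_unreal, hypotheses))
```

In this toy example the realizable labels receive a markedly shorter code length than the unrealizable ones, echoing the realizable/unrealizable distinction drawn in the abstract.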