Does extra knowledge necessarily improve generalization?

  • Authors: David Barber; David Saad
  • Affiliation: Department of Physics, University of Edinburgh, Edinburgh EH9 3JZ, UK
  • Venue: Neural Computation
  • Year: 1996

Abstract

The generalization error is a performance measure widely used in the analysis of adaptive learning systems. This measure generally depends critically on the knowledge the system is given about the problem it is trying to learn. In this paper we examine to what extent an increase in the system's knowledge of the problem necessarily reduces the generalization error. Using the standard definition of the generalization error, we present simple cases for which the intuitive idea of “reducivity” (that more knowledge will improve generalization) does not hold. Under a simple approximation, however, we find conditions under which “reducivity” is satisfied. Finally, we calculate the effect of a specific constraint on the generalization error of the linear perceptron, in which the signs of the weight components are fixed. This particular restriction results in a significant improvement in generalization performance.
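
As an informal illustration of the final claim, the sketch below sets up a teacher-student linear perceptron with Gaussian inputs and compares an unconstrained least-squares student with one whose weight signs are fixed to the teacher's signs. This is a Monte Carlo toy, not the paper's analytical statistical-mechanics calculation; the dimensions, example counts, and the use of non-negative least squares to enforce the sign constraint are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

# Toy teacher-student experiment for the linear perceptron (illustrative only;
# not the analytical treatment of the paper). Generalization error is measured
# as half the squared weight-space distance to the teacher, which equals half
# the mean squared output difference for unit-variance Gaussian inputs.

rng = np.random.default_rng(0)

N = 50        # input dimension (hypothetical choice)
P = 30        # number of training examples, P < N (underdetermined regime)
trials = 200

def gen_error(w_student, w_teacher):
    return 0.5 * np.sum((w_student - w_teacher) ** 2)

errs_free, errs_signed = [], []
for _ in range(trials):
    w_teacher = rng.standard_normal(N) / np.sqrt(N)
    X = rng.standard_normal((P, N))
    y = X @ w_teacher

    # Unconstrained student: minimum-norm least-squares fit to the data.
    w_free, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Sign-constrained student: each weight must share the teacher's sign
    # (or be zero). Writing w_i = s_i * v_i with v_i >= 0 turns this into a
    # non-negative least-squares problem in v.
    s = np.sign(w_teacher)
    v, _ = nnls(X * s, y)
    w_signed = s * v

    errs_free.append(gen_error(w_free, w_teacher))
    errs_signed.append(gen_error(w_signed, w_teacher))

print(f"unconstrained    e_g ~ {np.mean(errs_free):.4f}")
print(f"sign-constrained e_g ~ {np.mean(errs_signed):.4f}")
```

In this toy setting, where there are fewer examples than weights, the sign constraint typically shrinks the average distance to the teacher, which is the qualitative effect described in the abstract.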