Bias and the probability of generalization

  • Venue: Proceedings of the 1997 IASTED International Conference on Intelligent Information Systems (IIS '97)
  • Year: 1997

Abstract

In order to be useful, a learning algorithm must be able to generalize well when faced with inputs not previously presented to the system. A bias is necessary for any generalization, and, as several researchers have shown in recent years, no bias can achieve strictly better generalization than any other when performance is averaged over all possible functions or applications. The paper provides examples to illustrate this fact, but also explains how one bias or learning algorithm can be "better" than another in practice when the probability of occurrence of each function is taken into account. It shows how domain knowledge and an understanding of the conditions under which each learning algorithm performs well can be used to increase the probability of accurate generalization, and it identifies several of the conditions that should be considered when attempting to select an appropriate bias for a particular problem.
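
The averaging claim summarized in the abstract (the "no free lunch" result) can be made concrete with a small simulation. The sketch below is not from the paper; it assumes a 3-bit boolean input space and two arbitrarily chosen biases (a constant predictor and a Hamming-distance nearest-neighbour rule) and checks that, averaged over all 256 possible target functions, both achieve the same off-training-set accuracy of 0.5.

```python
# Minimal sketch (illustrative assumptions, not the paper's own experiments):
# averaged over ALL possible target functions, two very different biases
# obtain the same off-training-set accuracy.
from itertools import product

inputs = list(product([0, 1], repeat=3))   # all 3-bit input vectors
train_x = inputs[:4]                       # an arbitrary fixed training set
test_x = inputs[4:]                        # off-training-set inputs

def bias_constant_zero(train_pairs, x):
    """Bias 1: ignore the data, always predict 0."""
    return 0

def bias_nearest_neighbour(train_pairs, x):
    """Bias 2: predict the label of the closest training input (Hamming distance)."""
    return min(train_pairs, key=lambda p: sum(a != b for a, b in zip(p[0], x)))[1]

def average_ots_accuracy(learner):
    """Average off-training-set accuracy over every possible target function."""
    total, count = 0.0, 0
    # A target function assigns a label in {0, 1} to each of the 8 inputs.
    for labels in product([0, 1], repeat=len(inputs)):
        f = dict(zip(inputs, labels))
        train_pairs = [(x, f[x]) for x in train_x]
        correct = sum(learner(train_pairs, x) == f[x] for x in test_x)
        total += correct / len(test_x)
        count += 1
    return total / count

print(average_ots_accuracy(bias_constant_zero))      # 0.5
print(average_ots_accuracy(bias_nearest_neighbour))  # 0.5
```

The equality holds because, over the uniform sum of all target functions, the labels of off-training-set inputs are independent of the training data, so no deterministic predictor can do better than chance on average; weighting functions by their probability of occurring in practice is what lets one bias outperform another, which is the point the paper develops.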