Variants of iterative learning

  • Authors:
  • Steffen Lange; Gunter Grieser

  • Affiliations:
  • Deutsches Forschungszentrum für Künstliche Intelligenz, Stuhlsatzenhausweg 3, D-66123 Saarbrücken, Germany; Technische Universität Darmstadt, FB Informatik, Alexanderstraße 10, 64283 Darmstadt, Germany

  • Venue:
  • Theoretical Computer Science
  • Year:
  • 2003

Abstract

We investigate the principal learning capabilities of iterative learners in some more detail, confining ourselves to the learnability of indexable concept classes. The general scenario of iterative learning is as follows: an iterative learner successively takes as input one element of a text (an informant) for a target concept together with its previously made hypothesis, and outputs a new hypothesis about the target concept. The sequence of hypotheses has to converge to a hypothesis that correctly describes the target concept.

We study two variants of this basic scenario and compare the learning capabilities of all resulting models of iterative learning to one another as well as to the standard learning models of finite inference, conservative identification, and learning in the limit.

First, we consider the case that an iterative learner has to learn from fat texts (fat informants) only. In this setting, it is guaranteed that relevant information is, in principle, accessible at any time during the learning process. Second, we study a variant of iterative learning in which an iterative learner is supposed to succeed no matter which initial hypothesis is actually chosen. This variant is suited to describe scenarios that are typical for case-based reasoning.
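
To make the protocol concrete, here is a minimal sketch of an iterative learner in Python, under the simplifying assumption that concepts are finite sets of natural numbers and that the learner's hypothesis is simply the set of examples seen so far; the identifiers (iterative_learner, run_on_text) are illustrative and do not come from the paper.

    def iterative_learner(hypothesis, example):
        # One update step: the learner sees only its previous hypothesis
        # and the current text element; earlier elements are not stored.
        return hypothesis | {example}

    def run_on_text(text, initial_hypothesis=frozenset()):
        # Feed a (finite prefix of a) text to the learner and collect the
        # hypothesis sequence; learning succeeds if this sequence
        # converges to a correct description of the target concept.
        h = initial_hypothesis
        hypotheses = [h]
        for example in text:
            h = iterative_learner(h, example)
            hypotheses.append(h)
        return hypotheses

    # A fat text presents every element of the target concept infinitely
    # often; a finite prefix in which each element of the toy target
    # {1, 2, 3} recurs stands in for one here.
    fat_text_prefix = [1, 2, 1, 3, 2, 3, 1, 2, 3]
    print(run_on_text(fat_text_prefix)[-1])   # frozenset({1, 2, 3})

The initial_hypothesis parameter hints at the second variant discussed above: a learner that is robust to the choice of initial hypothesis must converge correctly no matter which value is passed there.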