Variations on U-shaped learning

  • Authors:
  • Lorenzo Carlucci; Sanjay Jain; Efim Kinber; Frank Stephan

  • Affiliations:
  • Lorenzo Carlucci: Department of Computer and Information Sciences, University of Delaware, Newark, DE, and Dipartimento di Matematica, Università di Siena, Siena, Italy
  • Sanjay Jain: School of Computing, National University of Singapore, Singapore, Republic of Singapore
  • Efim Kinber: Department of Computer Science, Sacred Heart University, Fairfield, CT
  • Frank Stephan: School of Computing and Department of Mathematics, National University of Singapore, Singapore, Republic of Singapore

  • Venue:
  • Information and Computation
  • Year:
  • 2006


Abstract

The paper deals with the following problem: is returning to wrong conjectures necessary to achieve the full power of algorithmic learning? Returning to wrong conjectures complements the paradigm of U-shaped learning, in which a learner returns to old correct conjectures. We explore this problem for classical models of learning in the limit from positive data: explanatory learning (where a learner stabilizes in the limit on a single correct grammar) and behaviourally correct learning (where a learner eventually outputs only correct grammars for the target concept, though not necessarily the same one). In both cases we show that returning to wrong conjectures is necessary to achieve full learning power. In contrast, one can modify learners (without losing learning power) so that they never exhibit inverted U-shaped learning behaviour, that is, never return to an old wrong conjecture with a correct conjecture in between. Furthermore, one can also modify a learner (without losing learning power) so that it does not return to old "overinclusive" conjectures, i.e., conjectures containing non-elements of the target language. We also consider the problem in the context of vacillatory learning (where a learner stabilizes on a finite set of correct grammars) and show that each of the following four constraints is restrictive (that is, reduces learning power): the learner does not return to old wrong conjectures; the learner is not inverted U-shaped; the learner does not return to old overinclusive conjectures; the learner does not return to old overgeneralizing conjectures. Finally, we show that learners that are consistent with the input seen so far can be made decisive: on any text, they never return to any old conjecture, wrong or right.
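For readers less familiar with the Gold-style framework the abstract assumes, the following is a minimal LaTeX sketch of the standard definitions, using common notation that is assumed here rather than quoted from the paper: $W_e$ denotes the language generated by grammar (index) $e$, a text $T$ for a language $L$ is an infinite sequence whose content is exactly $L$, $T[n]$ is its initial segment of length $n$, and $M$ is the learner. The "returning to wrong conjectures" clause is sketched semantically; the paper distinguishes syntactic and semantic variants, and its formalization may differ in detail.

  % Explanatory (TxtEx) learning: on every text for L, the learner
  % converges syntactically to one correct grammar.
  M\ \mathrm{TxtEx}\text{-learns}\ L \iff
    \forall T \text{ for } L\ \exists e, n_0\ \forall n \ge n_0:\;
    M(T[n]) = e \ \wedge\ W_e = L

  % Behaviourally correct (TxtBC) learning: all but finitely many
  % conjectures are correct, though their indices may keep changing.
  M\ \mathrm{TxtBC}\text{-learns}\ L \iff
    \forall T \text{ for } L\ \exists n_0\ \forall n \ge n_0:\;
    W_{M(T[n])} = L

  % U-shaped behaviour on a text T for L: correct, wrong, correct again.
  \exists n < m < k:\;
    W_{M(T[n])} = L \ \wedge\ W_{M(T[m])} \neq L \ \wedge\ W_{M(T[k])} = L

  % Inverted U-shaped behaviour: an old wrong conjecture is re-issued
  % with a correct conjecture in between.
  \exists n < m < k:\;
    W_{M(T[n])} = W_{M(T[k])} \neq L \ \wedge\ W_{M(T[m])} = L

  % Returning to a wrong conjecture (semantic reading, an assumption here):
  % a wrong conjecture is abandoned and later re-issued.
  \exists n < m < k:\;
    W_{M(T[n])} = W_{M(T[k])} \neq L \ \wedge\ W_{M(T[m])} \neq W_{M(T[n])}

  % Decisive learner: no abandoned conjecture, wrong or right, is ever
  % re-issued, i.e. the pattern W_{M(T[n])} = W_{M(T[k])} \neq W_{M(T[m])}
  % with n < m < k never occurs on any text.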