Variations on U-shaped learning

  • Authors:
  • Lorenzo Carlucci; Sanjay Jain; Efim Kinber; Frank Stephan

  • Affiliations:
  • Department of Computer and Information Sciences, University of Delaware, Newark, DE; School of Computing, National University of Singapore, Singapore; Department of Computer Science, Sacred Heart University, Fairfield, CT; School of Computing and Department of Mathematics, National University of Singapore, Singapore

  • Venue:
  • COLT'05: Proceedings of the 18th Annual Conference on Learning Theory
  • Year:
  • 2005

Abstract

The paper addresses the following question: is returning to wrong conjectures necessary to achieve the full power of learning? Returning to wrong conjectures complements the paradigm of U-shaped learning [2,6,8,20,24], in which a learner returns to previously abandoned correct conjectures. We explore this question for classical models of learning in the limit from positive data: TxtEx-learning, where a learner stabilizes in the limit on a single correct conjecture, and TxtBc-learning, where a learner stabilizes in the limit on a sequence of correct grammars for the target concept (the grammars may differ syntactically). In both cases, we show that, surprisingly, returning to wrong conjectures is sometimes necessary to achieve the full power of learning. On the other hand, it is not necessary to return to old “overgeneralizing” conjectures, i.e., conjectures containing elements not belonging to the target language. We also consider the question in the context of so-called vacillatory learning, where a learner stabilizes in the limit on a finite number of correct grammars. In this case we show that returning to old wrong conjectures and returning to old “overgeneralizing” conjectures are both necessary for full learning power. Finally, we show that, surprisingly, learners that are consistent with the input seen so far can be made decisive [2,21]: they never need to return to any previously abandoned conjecture, wrong or right.
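
For readers unfamiliar with the notation, the sketch below states the standard identification criteria from inductive inference that the abstract relies on; it is a rough summary, and the paper's exact formulations may differ in detail. Here M denotes the learner, T a text (positive-data presentation) for the target language L, T[n] its length-n initial segment, and W_e the language generated by grammar e.

```latex
% Standard identification criteria from inductive inference (a sketch;
% the paper's exact formulations may differ in detail).
% M: learner, T: text for the target language L, T[n]: length-n initial
% segment of T, W_e: language generated by grammar e.
\begin{align*}
  \textbf{TxtEx:} \quad & \exists e\,\exists n_0\,\forall n \ge n_0:\;
      M(T[n]) = e \;\wedge\; W_e = L
      && \text{(converges to one correct grammar)}\\
  \textbf{TxtBc:} \quad & \exists n_0\,\forall n \ge n_0:\;
      W_{M(T[n])} = L
      && \text{(eventually outputs only correct grammars)}\\
  \textbf{Vacillatory:} \quad & \exists n_0:\;
      |\{\,M(T[n]) : n \ge n_0\,\}| < \infty \;\wedge\;
      \forall n \ge n_0:\; W_{M(T[n])} = L
      && \text{(vacillates among finitely many correct grammars)}\\
  \textbf{Consistent:} \quad & \forall n:\;
      \mathrm{content}(T[n]) \subseteq W_{M(T[n])}
      && \text{(every conjecture covers the data seen so far)}\\
  \textbf{Decisive:} \quad & \neg\exists\, i<j<k:\;
      W_{M(T[i])} = W_{M(T[k])} \;\wedge\; W_{M(T[j])} \ne W_{M(T[i])}
      && \text{(never returns to an abandoned conjecture)}
\end{align*}
```

Read against these criteria, non-U-shaped learning forbids abandoning and later returning to a correct conjecture, the restriction studied in the paper concerns returning to wrong (or overgeneralizing) conjectures, and decisiveness forbids returning to any semantically abandoned conjecture at all.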