Systems that learn: an introduction to learning theory for cognitive and computer scientists.
Theory of recursive functions and effective computability.
Prudence and other conditions on formal language learning. Information and Computation.
COLT '88: Proceedings of the First Annual Workshop on Computational Learning Theory.
Polynomial-time inference of arbitrary pattern languages. New Generation Computing (selected papers from the International Workshop on Algorithmic Learning Theory, 1990).
Open problems in “systems that learn”. Proceedings of the 30th IEEE Symposium on Foundations of Computer Science.
A machine-independent theory of the complexity of recursive functions. Journal of the ACM.
The power of vacillation in language learning. SIAM Journal on Computing.
Introduction to Automata Theory, Languages, and Computation.
An Introduction to the General Theory of Algorithms.
Machine inductive inference and language identification. Proceedings of the 9th Colloquium on Automata, Languages and Programming.
ICALP '00: Proceedings of the 27th International Colloquium on Automata, Languages and Programming.
Variations on U-shaped learning. Information and Computation.
Memory-limited U-shaped learning. COLT '06: Proceedings of the 19th Annual Conference on Learning Theory.
Some recent results in U-shaped learning. TAMC '06: Proceedings of the Third International Conference on Theory and Applications of Models of Computation.
The paper addresses the following question: is returning to wrong conjectures necessary to achieve the full power of learning? This question complements the study of U-shaped learning [2,6,8,20,24], in which a learner returns to previously abandoned correct conjectures. We explore the problem for two classical models of learning in the limit: TxtEx-learning, where the learner stabilizes on a single correct conjecture, and TxtBc-learning, where the learner eventually outputs only (possibly syntactically distinct) grammars for the target concept. In both models we show that, surprisingly, returning to wrong conjectures is sometimes necessary to achieve full learning power. On the other hand, it is never necessary to return to old “overgeneralizing” conjectures, that is, conjectures containing elements that do not belong to the target language. We also consider the problem for so-called vacillatory learning, where the learner eventually vacillates among at most finitely many correct grammars. Here we show that returning to old wrong conjectures and returning to old “overgeneralizing” conjectures are both necessary for full learning power. Finally, we show that, surprisingly, learners that are consistent with the input seen so far can be made decisive [2,21]: they need never return to any previously abandoned conjecture, wrong or right.
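To make the models named above concrete, here is a minimal, hypothetical sketch (not from the paper) of a TxtEx-style learner for the particularly simple class of all finite languages over the natural numbers. The function names, the set-based encoding of conjectures, and the example text are all illustrative assumptions, not the paper's construction.

```python
# Toy sketch: identification in the limit, TxtEx-style, for the class of
# all finite languages. Conjectures are represented directly as frozensets
# standing in for grammar indices; None plays the role of a pause symbol.

def learner(prefix):
    """Conjecture the finite set of data seen so far, ignoring pauses."""
    return frozenset(x for x in prefix if x is not None)

def conjecture_sequence(text):
    """Feed ever-longer prefixes of a text to the learner and record
    each mind change (each new, distinct conjecture)."""
    conjectures = []
    for t in range(1, len(text) + 1):
        g = learner(text[:t])
        if not conjectures or g != conjectures[-1]:
            conjectures.append(g)
    return conjectures

# A finite prefix of a text for the language {2, 3, 5}.
text = [2, None, 3, 3, 5, None, 2, 5]
print(conjecture_sequence(text))
# [frozenset({2}), frozenset({2, 3}), frozenset({2, 3, 5})]
```

On any text for a finite language this learner stabilizes on a correct conjecture, and it is decisive in the sense above: its conjectures only grow, so it never returns to an abandoned conjecture, wrong or right. The paper's results concern richer classes of languages, where such a simple monotone strategy is not available and the trade-offs described in the abstract arise.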