The paper deals with the following problem: is returning to wrong conjectures necessary to achieve the full power of algorithmic learning? Returning to wrong conjectures complements the paradigm of U-shaped learning, in which a learner returns to old correct conjectures. We explore this problem for classical models of learning in the limit from positive data: explanatory learning (where the learner stabilizes in the limit on a single correct grammar) and behaviourally correct learning (where the learner eventually outputs only correct grammars for the target concept, though not necessarily the same one). In both cases we show that returning to wrong conjectures is necessary to achieve full learning power. In contrast, learners can be modified, without loss of learning power, so that they never exhibit inverted U-shaped behaviour, that is, never return to an old wrong conjecture with a correct conjecture in between. Furthermore, a learner can also be modified, without loss of learning power, so that it never returns to old "overinclusive" conjectures, that is, conjectures containing non-elements of the target language. We also consider the problem in the context of vacillatory learning (where the learner stabilizes in the limit on a finite set of correct grammars) and show that each of the following four constraints is restrictive, that is, reduces learning power: the learner does not return to old wrong conjectures; the learner is not inverted U-shaped; the learner does not return to old overinclusive conjectures; the learner does not return to old overgeneralizing conjectures. Finally, we show that learners that are consistent with the input seen so far can be made decisive: on any text, they never return to any old conjecture, wrong or right.
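For readers who want the contrast in symbols, the behaviours above can be written in standard inductive-inference notation. What follows is a sketch under our own conventions, not necessarily the paper's (in particular, a "return" to a conjecture can be read syntactically, as here, or semantically): let M be a learner, T a text for the target language L, T[n] the initial segment of T of length n, and W_e the language generated by grammar e.

  % U-shaped behaviour: a correct conjecture is abandoned and correctness is later restored
  \exists\, n_1 < n_2 < n_3 :\quad W_{M(T[n_1])} = L,\quad W_{M(T[n_2])} \neq L,\quad W_{M(T[n_3])} = L

  % Returning to a wrong conjecture: an abandoned incorrect grammar is issued again
  \exists\, n_1 < n_2 < n_3 :\quad M(T[n_2]) \neq M(T[n_1]),\quad M(T[n_3]) = M(T[n_1]),\quad W_{M(T[n_1])} \neq L

  % Inverted U-shaped behaviour: wrong, then correct, then the same wrong conjecture again
  \exists\, n_1 < n_2 < n_3 :\quad W_{M(T[n_1])} \neq L,\quad W_{M(T[n_2])} = L,\quad M(T[n_3]) = M(T[n_1])

On this reading, a decisive learner is one that never re-issues any abandoned conjecture, and the paper's results identify which of these prohibitions cost learning power in which model.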