Systems that learn: an introduction to learning theory for cognitive and computer scientists
Theory of recursive functions and effective computability
Prudence and other conditions on formal language learning
Information and Computation
Inductive inference from all positive and some negative data
Information Processing Letters
Types of monotonic language learning and their characterization
COLT '92 Proceedings of the fifth annual workshop on Computational learning theory
Language learning with some negative information
Journal of Computer and System Sciences
Incremental learning from positive data
Journal of Computer and System Sciences
A Machine-Independent Theory of the Complexity of Recursive Functions
Journal of the ACM (JACM)
The Power of Pluralism for Automatic Program Synthesis
Journal of the ACM (JACM)
An Introduction to the General Theory of Algorithms
Machine Learning
Machine Inductive Inference and Language Identification
Proceedings of the 9th Colloquium on Automata, Languages and Programming
A Guided Tour Across the Boundaries of Learning Recursive Languages
Algorithmic Learning for Knowledge-Based Systems, GOSLER Final Report
A Thesis in Inductive Inference
Proceedings of the 1st International Workshop on Nonmonotonic and Inductive Logic
Learning languages from positive data and a finite number of queries
Information and Computation
Introduction to Automata Theory, Languages, and Computation (3rd Edition)
Learning languages from positive data and negative counterexamples
Journal of Computer and System Sciences
U-shaped, iterative, and iterative-with-counter learning
COLT'07 Proceedings of the 20th annual conference on Learning theory
Memory-limited u-shaped learning
COLT'06 Proceedings of the 19th annual conference on Learning Theory
One-shot learners using negative counterexamples and nearest positive examples
Theoretical Computer Science
Iterative learning from texts and counterexamples using additional information
ALT'09 Proceedings of the 20th international conference on Algorithmic learning theory
A model for learning in the limit is defined in which a so-called iterative learner receives all positive examples from the target language, tests every new conjecture with a teacher (an oracle) to determine whether the conjecture is a subset of the target language (if it is not, the learner receives a negative counterexample), and uses only limited long-term memory, incorporated into its conjectures. Three variants of this model are compared, distinguished by the counterexamples the learner receives: least negative counterexamples, counterexamples whose size is bounded by the maximum size of the input seen so far, and arbitrary ones. A surprising result is that sometimes the absence of bounded counterexamples can help an iterative learner, whereas arbitrary counterexamples are useless. We also compare our learnability model with other relevant models of learnability in the limit, study how our model works for indexed classes of recursive languages, and show that learners in our model can work in a non-U-shaped way, never abandoning the first correct conjecture.
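The protocol in the abstract can be illustrated with a toy simulation. This is a minimal sketch, not code from the paper: the class of languages ({N} together with all initial segments {0,...,n}), the finite cap standing in for the infinite language N, and all function names are illustrative assumptions. It shows how a single least negative counterexample from a subset oracle can pin down the target, while the learner's only long-term memory is its current conjecture.

```python
# Toy sketch (illustrative, not the paper's construction): iterative learning
# of the class { N } ∪ { {0,...,n} : n ∈ N } from positive data plus least
# negative counterexamples to subset queries.

N_CAP = 100  # finite stand-in for the infinite language N in this sketch


def teacher(conjecture, target):
    """Subset oracle: return the least element of conjecture \\ target,
    or None when the conjecture is a subset of the target language."""
    for x in sorted(conjecture):
        if x not in target:
            return x
    return None


def learn(target, text):
    """Iterative learner: its only long-term memory is the conjecture itself."""
    conjecture = set(range(N_CAP))  # start by conjecturing "everything" (N)
    for datum in text:
        cex = teacher(conjecture, target)
        if cex is not None:
            # A least counterexample c reveals target = {0, ..., c-1},
            # since the target is an initial segment not containing c.
            conjecture = set(range(cex))
        # Positive data only confirm here: the conjecture already covers it.
    return conjecture


target = set(range(5))        # the language {0, 1, 2, 3, 4}
text = [0, 3, 1, 4, 2, 0, 3]  # a text (positive presentation) for it
print(learn(target, text) == target)
```

Note how the least counterexample does more work than an arbitrary one would: an arbitrary counterexample c only says c is outside the target, while the least one identifies the target exactly in one step.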