A model for learning in the limit is defined in which a (so-called iterative) learner receives all positive examples from the target language, tests each new conjecture with a teacher (an oracle) to determine whether it is a subset of the target language (receiving a negative counterexample if it is not), and uses only limited long-term memory, incorporated into its conjectures. Three variants of this model are compared, distinguished by the counterexamples the learner receives: least counterexamples, counterexamples whose size is bounded by the maximum size of the input seen so far, and arbitrary ones. We also compare our learnability model with other relevant models of learnability in the limit, study how our model works for indexed classes of recursive languages, and show that learners in our model can work in a non-U-shaped way, never abandoning the first correct conjecture.
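
To make the protocol concrete, the following is a minimal toy simulation in Python of the least-counterexample variant. The finite hypothesis space, the enumeration strategy, and all names are illustrative assumptions rather than the paper's construction; languages are modeled as finite sets of naturals so that subset queries are trivially decidable. The learner's only long-term memory is the index of its current conjecture, mirroring the model's requirement that memory be incorporated into the conjectures.

    from itertools import combinations, cycle, islice

    UNIVERSE = range(5)

    # Hypothesis space: every subset of a small universe, in a fixed enumeration.
    # (Assumed for illustration; the paper works with r.e. or indexed classes.)
    HYPOTHESES = [frozenset(c)
                  for r in range(len(UNIVERSE) + 1)
                  for c in combinations(UNIVERSE, r)]

    def least_counterexample(conjecture, target):
        # Subset query: None if the conjecture is a subset of the target,
        # otherwise the least element of the conjecture outside the target.
        outside = sorted(conjecture - target)
        return outside[0] if outside else None

    def learn(target, text, steps=200):
        # Iterative learner: its only long-term memory is the index i of its
        # current conjecture. A conjecture is abandoned exactly when the
        # teacher returns a counterexample (overgeneralization) or the
        # current positive datum is missing from it (undergeneralization).
        i = 0
        for datum in islice(text, steps):
            while (least_counterexample(HYPOTHESES[i], target) is not None
                   or datum not in HYPOTHESES[i]):
                i += 1  # advance to the next hypothesis in the enumeration
        return HYPOTHESES[i]

    target = frozenset({0, 2, 4})
    text = cycle(sorted(target))            # a fair positive-data presentation
    print(learn(target, text) == target)    # True: the learner converges

Once the enumeration reaches the target, neither test can fail again, so this toy learner never abandons a correct conjecture, in the spirit of the non-U-shapedness result; the bounded and arbitrary counterexample variants would differ only in which witness the teacher returns.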