In this paper we introduce a paradigm for learning, in the limit, potentially infinite languages from all positive data together with negative counterexamples provided in response to the conjectures made by the learner. Several variants of this paradigm are considered, reflecting different conditions and constraints on the type and size of negative counterexamples and on the time at which they are provided. In particular, we consider models where (1) the learner gets the least negative counterexample; (2) the size of a negative counterexample must be bounded by the size of the positive data seen so far; and (3) a counterexample can be delayed. The learning power of these models, their limitations, the relationships between them, and their relationships to classical paradigms for learning languages in the limit (without negative counterexamples) are explored. Several surprising results are obtained. In particular, for Gold's model of learning, which requires a learner to syntactically stabilize on correct conjectures, learners that receive negative counterexamples immediately turn out to be as powerful as those that receive them only after an indefinitely (but finitely) long delay, or that are merely told that their latest conjecture is not a subset of the target language, without any specific counterexample. Another result shows that for behaviorally correct learning (where semantic convergence is required of the learner) with negative counterexamples, a learner making just one error in almost all its conjectures has the "ultimate power": it can learn the class of all recursively enumerable languages. Yet another result demonstrates that sometimes positive data and negative counterexamples provided by a teacher are not enough to compensate for full positive and negative data.
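To make the learner/teacher interaction concrete, here is a minimal sketch, assuming languages are modeled as finite sets of natural numbers so that the teacher's subset test is decidable. All names (`teacher`, `learner`, `ask`) and the naive overgeneralizing strategy are invented for this illustration and are not from the paper; the sketch shows variant (1), where the teacher returns the least counterexample, and variant (2), where counterexamples larger than the size of the positive data seen so far are withheld.

```python
# A minimal, self-contained sketch of the protocol described above --
# an illustration, not code from the paper.  Languages are modeled as
# finite sets of natural numbers so the subset test is decidable.

def teacher(target, conjecture, bound=None):
    """Answer a conjecture: None if it is a subset of the target language,
    otherwise a negative counterexample.  Variant (1): the least
    counterexample is returned.  Variant (2): if the least counterexample
    exceeds `bound` (the size of the positive data seen so far), the
    teacher withholds it and answers None."""
    wrong = sorted(conjecture - target)
    if not wrong:
        return None
    if bound is not None and wrong[0] > bound:
        return None          # variant (2): counterexample too large to report
    return wrong[0]          # variant (1): least negative counterexample

def learner(text, ask):
    """A deliberately naive learner: it overgeneralizes to an initial
    segment of the naturals, then discards every element the teacher
    refutes."""
    seen, refuted = set(), set()
    for x in text:                                    # positive data stream
        seen.add(x)
        guess = set(range(max(seen) + 2)) - refuted   # overgeneralize
        cex = ask(guess, max(seen))
        while cex is not None:                        # refine until the
            refuted.add(cex)                          # teacher is silent
            guess.discard(cex)
            cex = ask(guess, max(seen))
        yield guess

target = {0, 2, 4, 6, 8}
for guess in learner([2, 0, 4, 8, 6], lambda g, b: teacher(target, g, b)):
    print(sorted(guess))
```

On this run the naive learner never rids itself of the spurious element 9: that counterexample always exceeds the size of the positive data seen so far, so under variant (2) the teacher stays silent about it. This is only a toy effect, but it hints at why bounding the size of counterexamples can genuinely change the power of the model.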