Elimination of potential hypotheses is a fundamental component of many learning processes. In order to understand the nature of elimination, we study the following model of learning recursive functions from examples: on any target function, the learning machine has to eliminate all possible hypotheses save one, such that the remaining one correctly describes the target function. It turns out that this type of learning by the process of elimination (elm-learning, for short) can be stronger than, weaker than, or of the same power as usual Gold-style learning.

While in usual learning any r.e. class of recursive functions can be learned in all of its numberings, this is no longer true for elm-learning. For elm-learnability of an r.e. class in a given numbering of it, we derive sufficient conditions on this numbering (decidability of index equivalence and paddability) as well as a condition that is both necessary and sufficient. We then address the problem of which r.e. classes are elm-learnable in all of their numberings and which are not.

Elm-learning of arbitrary classes of recursive functions is shown to be of the same power as usual learning. For elm-learnability of an arbitrary class in an arbitrary numbering, paddability of the numbering remains useful, whereas decidability of index equivalence can be "maximally weak" or "extremely useful". We also give a characterization of elm-learnability of an arbitrary class of recursive functions.

Finally, we consider some generalizations of elm-learning. One of them is of the same power as usual learning by teams. A further generalization even allows one to learn the class of all recursive functions.
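The elimination model can be illustrated with a minimal sketch. This is a toy illustration under strong simplifying assumptions, not the paper's formal model: it uses a hypothetical finite numbering of constant functions and a hypothetical helper `elm_learn` that eliminates every index contradicted by some observed example of the target's graph, so that only the correct hypothesis survives.

```python
# Toy sketch of learning by elimination (elm-learning).
# Hypothetical setting: a finite numbering of the constant
# functions f_i(x) = i for i in 0..n-1. A real elm-learner works
# over an infinite numbering and emits eliminated indices in the
# limit; here we just simulate elimination on finitely many data.

def make_numbering(n):
    """A toy numbering: index i denotes the constant function x -> i."""
    return [lambda x, i=i: i for i in range(n)]

def elm_learn(numbering, examples):
    """Eliminate every index whose function contradicts some
    observed example (x, y); return the surviving indices."""
    survivors = set(range(len(numbering)))
    for x, y in examples:
        survivors -= {i for i in survivors if numbering[i](x) != y}
    return survivors

if __name__ == "__main__":
    numbering = make_numbering(10)
    target = 3                                   # target function f_3
    examples = [(x, target) for x in range(5)]   # samples of its graph
    print(elm_learn(numbering, examples))        # {3}
```

In this toy numbering a single example already refutes every wrong index; in the general model the learner may only eliminate each wrong hypothesis eventually, and success means exactly one index is never eliminated and it is correct.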