Theory of recursive functions and effective computability
SIAM Journal on Computing
On the intrinsic complexity of learning
Information and Computation
An introduction to Kolmogorov complexity and its applications (2nd ed.)
Introduction to Algorithms
Inductive Inference, DFAs, and Computational Complexity
AII '89 Proceedings of the International Workshop on Analogical and Inductive Inference
On learning to coordinate: random bits help, insightful normal forms, and competency isomorphisms
Journal of Computer and System Sciences - Special issue: Learning theory 2003
Iterative learning from positive data and counters
ALT'11 Proceedings of the 22nd international conference on Algorithmic learning theory
Learning secrets interactively. Dynamic modeling in inductive inference
Information and Computation
Memory-limited non-U-shaped learning with solved open problems
Theoretical Computer Science
Iterative learning from positive data and counters
Theoretical Computer Science
Introduced is a new inductive inference paradigm, Dynamic Modeling. Within this learning paradigm, for example, function h learns function g iff, in the i-th iteration, h and g both produce output, h gets the sequence of all outputs from g in prior iterations as input, g gets all the outputs from h in prior iterations as input, and, from some iteration on, the sequence of h's outputs will be programs for the output sequence of g. Dynamic Modeling provides an idealization of, for example, a social interaction in which h seeks to discover program models of g's behavior it sees in interacting with g, and h openly discloses to g its sequence of candidate program models to see what g says back. Sample results: every g can be so learned by some h; there are g that can only be learned by an h if g can also learn that h back; there are extremely secretive h which cannot be learned back by any g they learn, but which, nonetheless, succeed in learning infinitely many g; quadratic-time learnability is strictly more powerful than linear-time learnability. This latter result, as well as others, follows immediately from general correspondence theorems obtained from a unified approach to the paradigms within inductive inference. Many proofs, some sophisticated, employ machine self-reference, a.k.a. recursion theorems.
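The interaction protocol described in the abstract can be sketched as a simple simulation loop. This is a minimal illustration, not the paper's formalism: the functions `h` and `g` and the round-based driver below are hypothetical stand-ins, and the learning criterion (h's outputs eventually being programs for g's output sequence) is not checked here, only the mutual-feedback exchange of output histories.

```python
from typing import Callable, List

def interact(h: Callable[[List[int]], int],
             g: Callable[[List[int]], int],
             rounds: int):
    """Run the Dynamic Modeling exchange for a fixed number of iterations.

    In each iteration, h and g both produce an output; h is given the
    sequence of all of g's outputs from prior iterations, and g is given
    all of h's outputs from prior iterations. Both histories start empty.
    """
    h_outputs: List[int] = []
    g_outputs: List[int] = []
    for _ in range(rounds):
        h_next = h(g_outputs)   # h sees only g's prior outputs
        g_next = g(h_outputs)   # g sees only h's prior outputs
        h_outputs.append(h_next)
        g_outputs.append(g_next)
    return h_outputs, g_outputs

# Toy example: both parties output the length of the opponent's history.
# (In the paradigm proper, h's outputs would be programs for g's sequence.)
h_out, g_out = interact(lambda hist: len(hist), lambda hist: len(hist), 5)
```

Note that both outputs in an iteration are computed before either history is updated, mirroring the simultaneous production of output within each iteration that the abstract describes.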