Dynamic Modeling in Inductive Inference

  • Authors:
  • John Case; Timo Kötzing

  • Affiliations:
  • Department of Computer and Information Sciences, University of Delaware, Newark, DE 19716-2586, USA (both authors)

  • Venue:
  • ALT '08: Proceedings of the 19th International Conference on Algorithmic Learning Theory
  • Year:
  • 2008

Abstract

Introduced is a new inductive inference paradigm, Dynamic Modeling. Within this learning paradigm, for example, function h learns function g iff, in the i-th iteration, h and g both produce output, h gets the sequence of all outputs from g in prior iterations as input, g gets all the outputs from h in prior iterations as input, and, from some iteration on, the sequence of h's outputs will be programs for the output sequence of g. Dynamic Modeling provides an idealization of, for example, a social interaction in which h seeks to discover program models of g's behavior it sees in interacting with g, and h openly discloses to g its sequence of candidate program models to see what g says back. Sample results: every g can be so learned by some h; there are g that can only be learned by an h if g can also learn that h back; there are extremely secretive h which cannot be learned back by any g they learn, but which, nonetheless, succeed in learning infinitely many g; quadratic-time learnability is strictly more powerful than linear-time learnability. This latter result, as well as others, follows immediately from general correspondence theorems obtained from a unified approach to the paradigms within inductive inference. Many proofs, some sophisticated, employ machine self-reference, a.k.a. recursion theorems.
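The interaction loop described in the abstract can be sketched as follows. This is a minimal, hypothetical simulation (not from the paper): `h` and `g` are modeled as functions mapping the other party's prior output sequence to a next output, and each iteration both produce output simultaneously, seeing only outputs from earlier iterations. The success criterion (h's outputs eventually being programs for g's output sequence) is not checked here; only the exchange of histories is shown.

```python
def interact(h, g, rounds):
    """Simulate the Dynamic Modeling interaction loop.

    h and g each map the other's sequence of prior outputs to a new
    output. In iteration i, both see only outputs from iterations < i.
    Returns the two output histories (h's outputs, g's outputs).
    """
    h_out, g_out = [], []
    for _ in range(rounds):
        # Compute both next outputs before appending, so each party
        # sees only the other's outputs from strictly prior iterations.
        h_next = h(tuple(g_out))
        g_next = g(tuple(h_out))
        h_out.append(h_next)
        g_out.append(g_next)
    return h_out, g_out

# Toy example (hypothetical): each party just outputs how many
# outputs it has seen from the other so far.
h = lambda gs: len(gs)
g = lambda hs: len(hs)
hs, gs = interact(h, g, 5)  # hs == gs == [0, 1, 2, 3, 4]
```

In the paper's setting, h's outputs would be interpreted as programs, and h learns g iff some suffix of h's output sequence consists of programs computing g's output sequence; the sketch above only captures the information flow between the two parties.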