Generalized notions of mind change complexity

  • Authors:
  • Arun Sharma; Frank Stephan; Yuri Ventsov

  • Affiliations:
  • National ICT Australia Ltd, Sydney Research Laboratory at Kensington, The University of New South Wales, Sydney NSW 2052, Australia; National ICT Australia Ltd, Sydney Research Laboratory at Kensington, The University of New South Wales, Sydney NSW 2052, Australia; University of Technology Sydney, Faculty of Information Technology, P.O. Box 123, Broadway NSW 2007, Australia

  • Venue:
  • Information and Computation
  • Year:
  • 2004

Abstract

Gold introduced the notion of learning in the limit, where a class S is learnable iff there is a recursive machine M which reads the course of values of a function f and converges to a program for f whenever f is in S. An important measure for the speed of convergence in this model is the number of mind changes before the onset of convergence. The oldest model considers a constant bound on the number of mind changes M makes on any input function; such a bound is referred to here as a bound of type 1. This was later generalized to bounds of type 2, where a counter ranges over constructive ordinals and is counted down at every mind change. Although ordinal bounds permit the inference of richer concept classes than constant bounds, they are still a severe restriction. The present work therefore introduces two more general approaches to bounding mind changes: counting by going down in a linearly ordered set (type 3) and counting by going down in a partially ordered set (type 4). In both cases the ordered set must not contain infinite recursive descending sequences. These four types of mind change bounds form a hierarchy, and there are identifiable classes that cannot be learned even with the most general bound of type 4. It is shown that the existence of a type 2 bound is equivalent to the existence of a learning algorithm which converges on every input function, recursive or not, and that the existence of a type 4 bound is equivalent to the existence of a learning algorithm which converges on every recursive function. A partial characterization of type 3 bounds yields a result of independent interest in recursion theory. The interplay between mind change complexity and the choice of hypothesis space is also investigated: it is established that for certain concept classes, a more expressive hypothesis space can reduce the mind change complexity of learning these classes. The notion of a mind change bound for behaviourally correct learning is indirectly addressed by employing the above four types to restrict the number of predictive errors of commission in finite error next value learning (NV''), a model equivalent to behaviourally correct learning. Again, natural characterizations for type 2 and type 4 bounds are derived. Their naturalness is further illustrated by characterizing them in terms of branches of uniformly recursive families of binary trees.
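
To make the counting-down mechanism concrete, here is a minimal sketch (ours, not the paper's; all names and the encoding are hypothetical) of a learner whose mind changes are charged against an ordinal counter below omega squared, encoded as lexicographically ordered pairs. It illustrates the type 2 idea that the finite mind change budget need not be fixed before learning begins:

```python
# A minimal sketch, assuming an ordinal encoding of pairs (a, b) ~ omega*a + b
# below omega^2, compared lexicographically. Not the paper's construction.

def learn_with_ordinal_counter(stream, conjecture):
    """Run a learner over a finite prefix of a data stream; every mind
    change must strictly decrease the ordinal counter (a type 2 bound)."""
    counter = (1, 0)  # start at omega: the finite budget is fixed only
                      # at the moment of the first mind change
    hypothesis = None
    for n, datum in enumerate(stream):
        new_hyp = conjecture(n, datum, hypothesis)
        if hypothesis is not None and new_hyp != hypothesis:
            a, b = counter
            # Counting down from omega*a means choosing some smaller
            # ordinal omega*(a-1) + k; here k is the current stage n.
            # Otherwise only the finite part is decremented.
            counter = (a - 1, n) if b == 0 else (a, b - 1)
            if counter < (0, 0):
                raise RuntimeError("mind change budget exhausted")
        hypothesis = new_hyp
    return hypothesis

# Example: conjecture the most recently seen value. On an eventually
# constant stream the learner changes its mind only finitely often and
# the counter never falls below (0, 0).
print(learn_with_ordinal_counter([3, 1, 4, 4, 4], lambda n, d, h: d))  # 4
```

The sketch only gestures at types 3 and 4: there the counter would range over a linearly or partially ordered recursive set without infinite recursive descending sequences, so each mind change still picks a strictly smaller element, but no ordinal-valued budget need exist.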