Avoiding coding tricks by hyperrobust learning

  • Authors:
  • Matthias Ott; Frank Stephan

  • Affiliations:
  • Institut für Logik, Komplexität und Deduktionssysteme, Universität Karlsruhe, 76128 Karlsruhe, Germany; Mathematisches Institut, Im Neuenheimer Feld 294, Universität Heidelberg, 69121 Heidelberg, Germany

  • Venue:
  • Theoretical Computer Science
  • Year:
  • 2002


Abstract

The present work introduces and justifies the notion of hyperrobust learning, where one fixed learner has to learn all functions in a given class plus their images under primitive recursive operators. The following results are shown. The notion of learnability does not change if the class of primitive recursive operators is replaced by a larger enumerable class of operators. A class is hyperrobustly Ex-learnable iff it is a subclass of a recursively enumerable family of total functions. Thus the notion of hyperrobust learning overcomes a problem of the traditional definitions of robustness, which either do not preserve learning by enumeration or still permit topological coding tricks for the learning criterion Ex. Hyperrobust BC-learning as well as the hyperrobust version of Ex-learning by teams are more powerful than hyperrobust Ex-learning. The notion of bounded totally reliable BC-learning lies properly between hyperrobust Ex-learning and hyperrobust BC-learning. Furthermore, the bounded totally reliable BC-learnable classes are characterized in terms of infinite branches of certain enumerable families of bounded recursive trees. A class of infinite branches of another family of trees separates hyperrobust BC-learning from totally reliable BC-learning. Finally, the notion of hyperrobust learning aided by selected context turns out to be much more restrictive than its counterpart for robust learning.
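
The characterization of hyperrobust Ex-learnability via recursively enumerable families of total functions connects to classical Gold-style learning by enumeration. The following Python sketch illustrates that general strategy, not the paper's construction: the concrete family `family` and the sample-generation loop are hypothetical stand-ins chosen only for demonstration. The learner always conjectures the least index in the enumeration that is consistent with the data seen so far.

```python
# Illustrative sketch of Gold-style learning by enumeration (not from the paper):
# given a uniformly recursive family of total functions, the learner conjectures
# the least index whose function agrees with all data observed so far.

def family(i, x):
    """Hypothetical uniformly recursive family f_i(x) = i*x + i, used only as a stand-in."""
    return i * x + i

def learner_by_enumeration(data):
    """Return the least index i such that family(i, .) agrees with the observed data.

    `data` lists the values f(0), f(1), ..., f(n) of the unknown target function.
    If the target belongs to the family, the sequence of conjectures converges
    to a correct index, which is the behaviour required by the Ex criterion.
    """
    i = 0
    while True:
        if all(family(i, x) == y for x, y in enumerate(data)):
            return i
        i += 1

if __name__ == "__main__":
    # Present longer and longer initial segments of f_3 to the learner;
    # the conjectures stabilize on a correct index.
    target = 3
    data = []
    for n in range(6):
        data.append(family(target, n))
        print(f"after {n + 1} values, conjecture: {learner_by_enumeration(data)}")
```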