The representation of recursive languages and its impact on the efficiency of learning

  • Authors: Steffen Lange
  • Affiliation: HTWK Leipzig, FB Informatik, PF 66, 04251 Leipzig
  • Venue: COLT '94, Proceedings of the seventh annual conference on Computational learning theory
  • Year: 1994

Abstract

In the present paper we study the learnability of enumerable families L of uniformly recursive languages as a function of the number of allowed mind changes, i.e., with respect to a well-studied measure of efficiency. We distinguish between exact learning (L has to be learnt with respect to the hypothesis space L itself), class-preserving learning (L has to be inferred with respect to some hypothesis space G having the same range as L), and class-comprising inference (L has to be inferred with respect to some hypothesis space G whose range includes range(L)), as well as between learning from positive examples alone and learning from both positive and negative examples. The measure of efficiency is applied to prove the superiority of class-comprising learning algorithms over class-preserving ones, which in turn prove superior to exact learning algorithms. In particular, we considerably improve previously obtained results and show that a suitable choice of the hypothesis space may considerably speed up learning algorithms, even if only positive examples are presented instead of both positive and negative data. Furthermore, we completely separate all modes of learning with a bounded number of mind changes from class-preserving learning that avoids overgeneralization.
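To make the efficiency measure concrete, the following is a minimal illustrative sketch (not taken from the paper) of how mind changes are counted. It assumes a toy family L_i = {0, 1, ..., i} of finite initial segments and a learner that, from positive data, always conjectures the least index consistent with everything seen so far; each time the conjecture changes after the first hypothesis, one mind change is charged.

```python
def learn_with_mind_changes(positive_examples):
    """Process a text (sequence of positive examples) for the toy family
    L_i = {0, 1, ..., i}; return (final hypothesis index, mind changes).

    This learner conjectures the least i with all seen data in L_i,
    i.e. the maximum element observed so far.
    """
    hypothesis = None
    mind_changes = 0
    seen_max = -1
    for x in positive_examples:
        seen_max = max(seen_max, x)
        new_hypothesis = seen_max  # least index consistent with the data
        if hypothesis is not None and new_hypothesis != hypothesis:
            mind_changes += 1  # the learner abandoned its conjecture
        hypothesis = new_hypothesis
    return hypothesis, mind_changes

# Target L_3 = {0, 1, 2, 3}, presented in some order with repetitions:
print(learn_with_mind_changes([1, 0, 3, 2, 3]))  # -> (3, 1)
```

The function names and the concrete family are hypothetical conveniences for illustration; the point is only that the number of such conjecture revisions is the quantity the paper bounds and compares across exact, class-preserving, and class-comprising learning.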