Measuring teachability using variants of the teaching dimension

  • Authors:
  • Frank J. Balbach

  • Affiliations:
  • David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada

  • Venue:
  • Theoretical Computer Science
  • Year:
  • 2008


Abstract

In a typical algorithmic learning model, a learner has to identify a target object from partial information. Conversely, in a teaching model a teacher has to give information that allows a learner to identify a target object. We devise two variants of the classical teaching model for Boolean concept classes, based on the teaching dimension, and describe them by teaching-dimension-like combinatorial parameters. In the first model, the learners choose consistent hypotheses of least complexity. We show that 1-decision lists become harder to teach the longer they are, and that 2-term DNFs become harder to teach the more terms they have. This contrasts with the teachability results for these classes in the teaching-dimension model. In our second model, the learners choose consistent hypotheses based on the assumption that the teacher is optimal. We show that monomials can be taught with a linear number of examples, whereas some 1-decision lists need exponentially many.
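The variants in the paper build on the classical teaching dimension: the minimum number of labeled examples a teacher must present so that the target is the only concept in the class consistent with them. The following brute-force sketch illustrates that classical notion on a toy class of monomials over two variables; the function names and the concept class are illustrative choices, not taken from the paper.

```python
from itertools import combinations, product

def teaching_dim(target, concept_class, instances):
    """Smallest number of labeled examples consistent with `target`
    and with no other concept in `concept_class` (brute force)."""
    for k in range(len(instances) + 1):
        for sample in combinations(instances, k):
            labels = [target(x) for x in sample]
            consistent = [c for c in concept_class
                          if all(c(x) == y for x, y in zip(sample, labels))]
            if consistent == [target]:
                return k
    return None  # target is not distinguishable within the class

# Toy example: four monomials over two Boolean variables.
instances = list(product([0, 1], repeat=2))
concepts = [
    lambda x: x[0],          # x1
    lambda x: x[1],          # x2
    lambda x: x[0] & x[1],   # x1 AND x2
    lambda x: 1,             # empty monomial (always true)
]
dims = [teaching_dim(c, concepts, instances) for c in concepts]
print(dims)  # teaching dimension of each concept within this class
```

The paper's first variant replaces "any consistent hypothesis" with "consistent hypothesis of least complexity", and the second assumes the learner reasons about an optimal teacher; both change which samples suffice, which is why the resulting parameters can diverge sharply from the classical teaching dimension computed here.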