Learning Conjunctive Concepts in Structural Domains

  • Authors:
  • David Haussler

  • Affiliations:
  • Department of Computer Science, University of California, Santa Cruz, CA 95064 USA. HAUSSLER@SATURN.UCSC.EDU

  • Venue:
  • Machine Learning
  • Year:
  • 1989

Abstract

We study the problem of learning conjunctive concepts from examples on structural domains such as the blocks world. This class of concepts is formally defined, and it is shown that even for samples in which each example (positive or negative) is a two-object scene, it is NP-complete to determine whether there is any concept in this class that is consistent with the sample. We demonstrate how this result affects the feasibility of Mitchell's version space approach and shows that this class of concepts is unlikely to be polynomially learnable from random examples alone in Valiant's PAC framework. On the other hand, we show that for any fixed bound on the number of objects per scene, this class is polynomially learnable if, in addition to receiving random examples, the learning algorithm is allowed to make subset queries. In establishing this result, we calculate the capacity of the hypothesis space of conjunctive concepts in a structural domain and use a general theorem of Vapnik and Chervonenkis. This latter result can also be used to estimate a sample size sufficient for heuristic learning techniques that do not use queries.
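
To illustrate how a capacity or VC-dimension calculation translates into a sufficient sample size, the sketch below computes a generic VC-style bound of the Blumer-Ehrenfeucht-Haussler-Warmuth form. The constants and the `vc_dim` value are illustrative assumptions, not the specific capacity bound derived in the paper.

```python
import math

def pac_sample_size(vc_dim: int, epsilon: float, delta: float) -> int:
    """Sufficient sample size in the generic VC-style form
    m >= max((4/eps) * log2(2/delta), (8*d/eps) * log2(13/eps)).
    This is a standard bound for consistent learners, not the exact
    bound obtained from the capacity calculation in the paper."""
    term_confidence = (4.0 / epsilon) * math.log2(2.0 / delta)
    term_dimension = (8.0 * vc_dim / epsilon) * math.log2(13.0 / epsilon)
    return math.ceil(max(term_confidence, term_dimension))

# Hypothetical example: a hypothesis class of effective dimension 50,
# learned to accuracy 0.1 with confidence 0.95 (delta = 0.05).
print(pac_sample_size(vc_dim=50, epsilon=0.1, delta=0.05))
```

Any bound of this shape grows only polynomially in the dimension, 1/epsilon, and log(1/delta), which is why a polynomial capacity bound for conjunctive concepts over scenes with a fixed number of objects yields a polynomial sample-size estimate.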