A Study of Explanation-Based Methods for Inductive Learning

  • Authors:
  • Nicholas S. Flann and Thomas G. Dietterich

  • Affiliations:
  • Department of Computer Science, Oregon State University, Corvallis, Oregon 97331-3902 (FLANN@CS.ORST.EDU; TGD@CS.ORST.EDU)

  • Venue:
  • Machine Learning
  • Year:
  • 1989

Abstract

This paper formalizes a new learning-from-examples problem: identifying a correct concept definition from positive examples such that the concept is some specialization of a target concept defined by a domain theory. It describes an empirical study that evaluates three methods for solving this problem: explanation-based generalization (EBG), multiple example explanation-based generalization (mEBG), and a new method, induction over explanations (IOE). The study demonstrates that the two existing methods (EBG and mEBG) exhibit two shortcomings: (a) they rarely identify the correct definition, and (b) they are brittle in that their success depends greatly on the choice of encoding of the domain theory rules. The study demonstrates that the new method, IOE, does not exhibit these shortcomings. This method applies the domain theory to construct explanations from multiple training examples as in mEBG, but forms the concept definition by employing a similarity-based generalization policy over the explanations. IOE has the advantage that an explicit domain theory can be exploited to aid the learning process, the dependence on the initial encoding of the domain theory is significantly reduced, and the correct concepts can be learned from few examples. The study evaluates the methods in the context of an implemented system, called Wyl2, which learns a variety of concepts in chess including “skewer” and “knight-fork.”
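The abstract describes IOE only at a high level: explanations are built from several training examples, and the concept definition is formed by a similarity-based generalization over those explanations rather than by variabilizing a single explanation as in EBG. The toy Python sketch below is an illustrative assumption, not the paper's algorithm or representation; the tuple encoding of explanations, the function names, and the chess predicates are all invented for this example. It contrasts a single-example variabilization policy with an anti-unification-style policy over multiple explanations, which keeps the constants the examples agree on.

```python
# Toy sketch of the contrast described in the abstract, NOT the paper's actual
# algorithm. Explanations are modeled as nested tuples (operator, arg1, ...);
# leaves are constants drawn from a training example.

from itertools import count

def variabilize(expl, fresh=None):
    """EBG-style policy (illustrative): replace every constant leaf with a
    fresh variable, keeping only the explanation structure of one example."""
    fresh = fresh if fresh is not None else count()
    if isinstance(expl, tuple):
        return (expl[0],) + tuple(variabilize(a, fresh) for a in expl[1:])
    return f"?v{next(fresh)}"

def ioe_generalize(expls, fresh=None):
    """IOE-style policy (illustrative): generalize several explanations at
    once, keeping constants that agree across all examples and introducing a
    variable only where the examples differ."""
    fresh = fresh if fresh is not None else count()
    first = expls[0]
    if all(e == first for e in expls):
        return first                                  # identical everywhere: keep it
    if (all(isinstance(e, tuple) for e in expls)
            and len({(e[0], len(e)) for e in expls}) == 1):
        # same operator and arity in every explanation: recurse per argument
        return (first[0],) + tuple(
            ioe_generalize([e[i] for e in expls], fresh)
            for i in range(1, len(first)))
    return f"?v{next(fresh)}"                         # structures disagree: generalize

if __name__ == "__main__":
    # Two hypothetical knight-fork explanations differing only in the second
    # piece attacked.
    e1 = ("fork", ("attacks", "knight", "king"), ("attacks", "knight", "rook"))
    e2 = ("fork", ("attacks", "knight", "king"), ("attacks", "knight", "queen"))
    print(variabilize(e1))            # every leaf becomes a variable
    print(ioe_generalize([e1, e2]))   # keeps "knight" and "king", varies the rest
```

Run on these two invented explanations, the single-example policy loses all of the constants, while the multiple-example policy retains "knight" and "king" and introduces a variable only for the piece on which the examples disagree, mirroring the abstract's claim that IOE can identify more specific, correct definitions from a few examples.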