Learning by Failing to Explain: Using Partial Explanations to Learn in Incomplete or Intractable Domains

  • Authors:
  • Robert J. Hall

  • Affiliations:
  • Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A. RJH@WHEATIES.AI.MIT.EDU

  • Venue:
  • Machine Learning
  • Year:
  • 1988

Abstract

Explanation-based learning depends on having an explanation on which to base generalization. Thus, a system with an incomplete or intractable domain theory cannot use this method to learn from every precedent. However, in such cases the system need not resort to purely empirical generalization methods, because it may already know almost everything required to explain the precedent. Learning by failing to explain is a method that uses current knowledge to prune the well-understood portions of complex precedents (and rules) so that what remains may be conjectured as a new rule. This paper describes precedent analysis, partial explanation of a precedent (or rule) to isolate the new technique(s) it embodies, and rule reanalysis, which involves analyzing old rules in terms of new rules to obtain a more general set. The algorithms PA, PA-RR, and PA-RR-GW implement these ideas in the domains of digital circuit design and simplified gear design.
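The abstract describes the pruning step only informally. The sketch below is a minimal, hedged illustration of that idea, not the paper's PA algorithm: the propositional fact encoding, the forward_close and conjecture_rule helpers, and the toy circuit example are all assumptions introduced here for illustration. It shows how parts of a precedent that the current theory already explains can be pruned (replaced by their derived descriptions), with the unexplained remainder conjectured as a new rule.

```python
# Hypothetical sketch of "prune what you can explain, conjecture the rest."
# A precedent is a set of observed facts plus a goal; the (incomplete)
# domain theory is a list of (premises -> conclusion) rules.
from typing import FrozenSet, List, Optional, Set, Tuple

Rule = Tuple[FrozenSet[str], str]  # (premises, conclusion)

def forward_close(facts: Set[str], theory: List[Rule]) -> Set[str]:
    """Derive everything the (possibly incomplete) theory can explain."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in theory:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def conjecture_rule(facts: Set[str], goal: str,
                    theory: List[Rule]) -> Optional[Rule]:
    """Prune the explained portion of a precedent; conjecture the rest."""
    derived = forward_close(facts, theory)
    if goal in derived:
        return None  # fully explained: ordinary EBL would apply instead
    # Prune facts a known rule already accounts for, keeping the derived
    # higher-level descriptions plus any raw facts no rule explains.
    consumed: Set[str] = set()
    for premises, _conclusion in theory:
        if premises <= derived:
            consumed |= premises
    return (frozenset(derived - consumed), goal)

if __name__ == "__main__":
    # Toy circuit-flavored precedent: two wired inverters are explained as
    # a buffer, but nothing explains how the buffer plus the mystery block
    # yields the observed delay behavior, so that residue is conjectured.
    theory: List[Rule] = [
        (frozenset({"inverter(g1)", "inverter(g2)", "wired(g1,g2)"}),
         "buffer(g1,g2)"),
    ]
    precedent = {"inverter(g1)", "inverter(g2)", "wired(g1,g2)",
                 "mystery_block(m)"}
    print(conjecture_rule(precedent, "delay_line(g1,g2,m)", theory))
    # Conjectured rule: {buffer(g1,g2), mystery_block(m)} -> delay_line(g1,g2,m)
```

In this toy run the explained sub-structure (the two inverters forming a buffer) is pruned away, and the conjectured rule relates only the remaining, unexplained parts to the goal, mirroring the abstract's description of isolating the new technique a precedent embodies.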