Lazy explanation-based learning: a solution to the intractable theory problem

  • Authors:
  • Prasad Tadepalli

  • Affiliations:
  • Department of Computer Science, Rutgers University, New Brunswick, NJ and School of Computer Science, Carnegie Mellon University, Pittsburgh, PA

  • Venue:
  • IJCAI'89 Proceedings of the 11th international joint conference on Artificial intelligence - Volume 1
  • Year:
  • 1989


Abstract

Explanation-Based Learning (EBL) depends on the ability of a system to explain to itself, using the domain theory, that a given training example is a member of the target concept. However, in many complex domains doing so is intractable. In this paper I introduce a learning technique called Lazy Explanation-Based Learning as a solution to the problem of the intractable explanation process in EBL. The technique is based on the idea that when the domain theory is intractable, a system can still learn by generalizing incomplete explanations and then incrementally refining the over-general knowledge thus acquired in response to unexpected plan failures. I describe a program that incrementally learns planning knowledge in game domains through Lazy Explanation-Based Learning, and I present both empirical and theoretical evidence for the viability of the technique.
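The refine-on-failure idea in the abstract can be illustrated with a small sketch. This is a hypothetical illustration, not the paper's program: the class name `LazyRule`, the precondition-set representation, and the game-domain condition names are all assumptions made for the example. A rule generalized from an incomplete explanation omits conditions the intractable theory could not verify, so it is over-general; when a plan built from it fails, the rule is specialized by adding the condition whose absence explains the failure.

```python
# Hypothetical sketch of Lazy EBL's generalize-then-refine loop
# (illustrative only; not the representation used in the paper).

class LazyRule:
    """A planning rule represented as a set of preconditions."""

    def __init__(self, preconditions):
        # Learned from an incomplete explanation: over-general,
        # because unverifiable conditions were simply left out.
        self.preconditions = set(preconditions)

    def applies(self, state):
        # The rule fires whenever all its preconditions hold.
        return self.preconditions <= state

    def refine(self, missing_condition):
        # On an unexpected plan failure, specialize the rule by
        # adding the condition whose absence caused the failure.
        self.preconditions.add(missing_condition)


# Usage: a capture rule learned lazily in a game domain.
rule = LazyRule({"piece_attacks_target"})
state = {"piece_attacks_target", "target_is_defended"}
assert rule.applies(state)       # over-general: fires despite the defender

# The plan fails because the target was defended; refine the rule.
rule.refine("target_is_undefended")
assert not rule.applies(state)   # the specialized rule no longer fires here
```

The sketch mirrors the abstract's claim that learning can proceed before a complete explanation exists, paying for the missing conditions later, only when a failure actually reveals them.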