Integrating inductive neural network learning and explanation-based learning

  • Authors:
  • Sebastian B. Thrun;Tom M. Mitchell

  • Affiliations:
  • Dept. of Computer Science III, University of Bonn, Bonn, Germany;School of Computer Science, Carnegie Mellon University, Pittsburgh, PA

  • Venue:
  • IJCAI'93 Proceedings of the 13th International Joint Conference on Artificial Intelligence - Volume 2
  • Year:
  • 1993

Abstract

Many researchers have noted the importance of combining inductive and analytical learning, yet we still lack combined learning methods that are effective in practice. We present a learning method that combines explanation-based learning from a previously learned approximate domain theory with inductive learning from observations. This method, called explanation-based neural network learning (EBNN), is based on a neural network representation of domain knowledge. Explanations are constructed by chaining together inferences from multiple neural networks. In contrast with symbolic approaches to explanation-based learning, which extract weakest preconditions from the explanation, EBNN extracts the derivatives of the target concept with respect to the training example features. These derivatives summarize the dependencies within the explanation and are used to bias the inductive learning of the target concept. Experimental results on a simulated robot control task show that EBNN requires significantly fewer training examples than standard inductive learning. Furthermore, the method is shown to be robust to errors in the domain theory, operating effectively over a broad spectrum from very strong to very weak domain theories.
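The sketch below illustrates the core idea described in the abstract, not the authors' implementation: a previously trained "domain theory" network explains a training example, its input derivatives (slopes) are extracted, and those slopes are used as an additional fit term when inductively training the target network. It assumes a single domain-theory network and a scalar target; all function names, parameter names, and the fixed slope weight `mu` are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    """Simple fully connected network with tanh hidden units and a scalar output."""
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze()

def init_mlp(key, sizes):
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, k = jax.random.split(key)
        params.append((0.1 * jax.random.normal(k, (n_in, n_out)),
                       jnp.zeros(n_out)))
    return params

# "Domain theory": assume this network was trained earlier on related experience.
theory_params = init_mlp(jax.random.PRNGKey(0), [4, 16, 1])

def explain(x):
    """Analytical step: slope of the domain-theory prediction w.r.t. the features."""
    return jax.grad(lambda xi: mlp(theory_params, xi))(x)

def ebnn_loss(params, x, y, slope, mu=0.5):
    """Inductive fit to the observed value plus a fit to the explained slopes."""
    value_err = (mlp(params, x) - y) ** 2
    slope_err = jnp.sum((jax.grad(lambda xi: mlp(params, xi))(x) - slope) ** 2)
    return value_err + mu * slope_err

# One gradient step on a single (x, y) observation.
target_params = init_mlp(jax.random.PRNGKey(1), [4, 16, 1])
x, y = jnp.ones(4), 0.3
slope = explain(x)
grads = jax.grad(ebnn_loss)(target_params, x, y, slope)
target_params = jax.tree_util.tree_map(lambda p, g: p - 0.01 * g,
                                       target_params, grads)
```

In the paper, the slope information is obtained by chaining inferences through multiple networks and its influence depends on how accurate the explanation is; the single network and fixed weight here are simplifications for illustration.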