Using prior learning to facilitate the learning of new causal theories

  • Authors:
  • Michael Pazzani; Michael Oyer; Margot Flowers

  • Affiliations:
  • UCLA, Artificial Intelligence Laboratory, Los Angeles, CA (all authors)

  • Venue:
  • IJCAI'87 Proceedings of the 10th international joint conference on Artificial intelligence - Volume 1
  • Year:
  • 1987


Abstract

We present an approach to learning causal knowledge which lies between two extremely different approaches to learning:

  • empirical methods (e.g., [12,17]), which detect similarities and differences between examples to reveal regularities.
  • explanation-based methods (e.g., [13,4]), which derive a causal explanation for a single event from existing causal knowledge. The event and the causal explanation are generalized to create a new "chunk" of causal knowledge by retaining only those features of the event which were needed to produce the explanation.

In the approach to learning presented in this paper and implemented in a program called OCCAM, prior knowledge indicating what sort of distinctions have proven useful in the past influences the search for causal hypotheses. Our approach shares a goal with explanation-based learning: to allow existing knowledge to facilitate future learning so that fewer examples are required. However, it does not share one shortcoming of explanation-based learning, since it can create causal theories which are not implications of existing causal theories.
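The contrast the abstract draws can be sketched in code. The following is a hypothetical toy illustration (not OCCAM's actual algorithm, and all function and feature names are invented): a purely empirical learner keeps every feature shared across examples, while a learner biased by prior knowledge of which kinds of distinctions have proven useful retains only those features, so fewer examples suffice to discard irrelevancies.

```python
def empirical_hypothesis(examples):
    """Purely empirical: retain every feature-value pair shared by all examples."""
    shared = dict(examples[0])
    for ex in examples[1:]:
        shared = {f: v for f, v in shared.items() if ex.get(f) == v}
    return shared

def biased_hypothesis(examples, useful_features):
    """Prior-knowledge bias: of the shared features, keep only those kinds of
    distinctions that have proven causally relevant in past theories."""
    shared = empirical_hypothesis(examples)
    return {f: v for f, v in shared.items() if f in useful_features}

# Two toy events in which a balloon bursts (features are illustrative).
events = [
    {"object": "balloon", "action": "squeeze", "color": "red", "day": "mon"},
    {"object": "balloon", "action": "squeeze", "color": "red", "day": "tue"},
]

# Suppose prior theories indicate that object types and actions tend to
# matter causally, while colors and days rarely do.
prior = {"object", "action"}

print(empirical_hypothesis(events))      # still retains the spurious "color" feature
print(biased_hypothesis(events, prior))  # drops it without needing more examples
```

With only two examples, the empirical learner cannot rule out `color` as causally relevant; the biased learner excludes it immediately because prior theories never found that kind of distinction useful.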