Theory refinement on Bayesian networks

  • Authors:
  • Wray Buntine

  • Affiliations:
  • RIACS, NASA Ames Research Center, Moffett Field, CA

  • Venue:
  • UAI'91: Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence
  • Year:
  • 1991

Abstract

Theory refinement is the task of updating a domain theory in the light of new cases, either automatically or with some expert assistance. The problem of theory refinement under uncertainty is reviewed here in the context of Bayesian statistics, a theory of belief revision. The problem is reduced to an incremental learning task as follows: the learning system is initially primed with a partial theory supplied by a domain expert, and thereafter maintains its own internal representation of alternative theories, which can be interrogated by the domain expert and incrementally refined from data. Algorithms for refinement of Bayesian networks are presented to illustrate what is meant by "partial theory", "alternative theory representation", etc. The algorithms are an incremental variant of batch learning algorithms from the literature, so they work well in both batch and incremental mode.
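
As a rough illustration of what "incrementally refined from data" can mean for a Bayesian network of fixed structure, the sketch below updates Dirichlet pseudo-counts for one conditional probability table, one case at a time. This is a minimal sketch under assumed simplifications: the `IncrementalCPT` class, the variable names, and the fixed-structure restriction are hypothetical and are not taken from the paper, whose algorithms additionally maintain a representation of alternative network structures.

```python
# Illustrative sketch only: incremental Bayesian updating of one conditional
# probability table of a fixed-structure Bayesian network via Dirichlet
# pseudo-counts. This is NOT the paper's algorithm; it only shows why
# batch and incremental processing of cases yield the same posterior.

from collections import defaultdict


class IncrementalCPT:
    """Conditional probability table refined one case at a time.

    A Dirichlet prior (playing the role of an expert-supplied "partial
    theory") is encoded as pseudo-counts; each new case increments the
    matching count, so the order and grouping of cases does not matter.
    """

    def __init__(self, child, parents, child_values, prior_count=1.0):
        self.child = child
        self.parents = parents
        self.child_values = child_values
        # counts[parent_config][child_value] starts at the prior pseudo-count.
        self.counts = defaultdict(
            lambda: {v: prior_count for v in child_values}
        )

    def update(self, case):
        """Incorporate one fully observed case (dict: variable -> value)."""
        parent_config = tuple(case[p] for p in self.parents)
        self.counts[parent_config][case[self.child]] += 1.0

    def probability(self, child_value, parent_config):
        """Posterior predictive P(child = value | parents = config)."""
        row = self.counts[parent_config]
        return row[child_value] / sum(row.values())


# Hypothetical usage: refine P(Alarm | Burglary, Earthquake) from a stream
# of cases, exactly as a batch learner would from the same cases at once.
cpt = IncrementalCPT("Alarm", ["Burglary", "Earthquake"], [True, False])
cases = [
    {"Burglary": True, "Earthquake": False, "Alarm": True},
    {"Burglary": False, "Earthquake": False, "Alarm": False},
]
for case in cases:
    cpt.update(case)
print(cpt.probability(True, (True, False)))
```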