Update rules for parameter estimation in Bayesian networks

  • Authors:
  • Eric Bauer; Daphne Koller; Yoram Singer

  • Affiliations:
  • Stanford University; Stanford University; AT&T Labs

  • Venue:
  • UAI'97: Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence
  • Year:
  • 1997

Abstract

This paper re-examines the problem of parameter estimation in Bayesian networks with missing values and hidden variables from the perspective of recent work in on-line learning [13]. We provide a unified framework for parameter estimation that encompasses both on-line learning, where the model is continuously adapted to new data cases as they arrive, and the more traditional batch learning, where a pre-accumulated set of samples is used in a one-time model selection process. In the batch case, our framework encompasses both the gradient projection algorithm [2, 3] and the EM algorithm [15] for Bayesian networks. The framework also leads to new on-line and batch parameter update schemes, including a parameterized version of EM. We provide both empirical and theoretical results indicating that parameterized EM allows faster convergence to the maximum likelihood parameters than does standard EM.
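The abstract mentions a parameterized version of EM that can converge to the maximum likelihood parameters faster than standard EM. As a minimal illustrative sketch (not the paper's algorithm or experiments), the code below assumes the interpolation form theta_new = (1 - eta) * theta_old + eta * theta_EM, where eta = 1 recovers standard EM and eta > 1 takes a longer step toward the EM target. The model, data, and all variable names are hypothetical: a single binary variable with values missing completely at random, where the effect of the step size on convergence is easy to see.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a single binary variable X with P(X = 1) = 0.7, then hide 60% of the
# values completely at random, so EM has to impute the missing cases.
x = (rng.random(1000) < 0.7).astype(float)
observed = rng.random(1000) < 0.4
n_total = len(x)
n_obs_one = x[observed].sum()
n_missing = int((~observed).sum())

def em_target(theta):
    """M-step estimate: expected count of X = 1 (observed ones plus each missing
    case imputed with the current theta), normalized by the total sample size."""
    return (n_obs_one + n_missing * theta) / n_total

def parameterized_em(eta, theta=0.01, iters=10):
    """Parameterized EM sketch: move a fraction eta toward the EM target each step.
    eta = 1 is standard EM; eta > 1 over-relaxes the step in the same direction."""
    for _ in range(iters):
        theta = (1 - eta) * theta + eta * em_target(theta)
        theta = min(max(theta, 1e-9), 1 - 1e-9)  # keep larger steps inside (0, 1)
    return theta

# With data missing completely at random, the MLE is just the observed frequency;
# every eta converges to it, but larger eta gets there in fewer iterations here.
mle = n_obs_one / observed.sum()
print(f"MLE from observed cases: {mle:.4f}")
for eta in (1.0, 1.5, 1.9):
    print(f"eta = {eta}: {parameterized_em(eta):.4f}")
```

In this toy setting the standard EM step contracts the error by roughly the fraction of missing cases per iteration, so an over-relaxed step (eta > 1) reaches the fixed point in fewer iterations, which is the qualitative behavior the abstract attributes to parameterized EM.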