Gradient Descent Training of Bayesian Networks

  • Authors:
  • Finn Verner Jensen


  • Venue:
  • ECSQARU '95 Proceedings of the European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty
  • Year:
  • 1995

Abstract

As shown by Russell et al., 1995 [7], Bayesian networks can be equipped with a gradient descent learning method similar to the training method for neural networks. The required gradients can be calculated locally, alongside propagation. We review how this is done, and we show how the gradient descent approach can be applied to tasks such as tuning and training with training sets of both definite and non-definite classifications. We introduce tools for resistance and damping to guide the direction of convergence, and we use them in a new adaptation method that can also handle situations where parameters in the network covary.
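To make the idea concrete, here is a minimal, hypothetical sketch (not the paper's algorithm) of gradient ascent on the log-likelihood of a two-node network A → B. A is hidden and only B is observed, so the training data give no definite classification of A. The table `theta[a][b]` plays the role of the conditional probability P(B=b | A=a); the prior P(A=a) is held fixed, and each gradient step is followed by a Euclidean projection back onto the probability simplex. All names and constants are invented for illustration.

```python
# Toy sketch: gradient ascent on the log-likelihood of a two-node
# Bayesian network A -> B, with A hidden and only B observed.
# theta[a][b] approximates P(B=b | A=a); P(A=a) is held fixed.

P_A = [0.7, 0.3]                      # fixed prior P(A=a), chosen arbitrarily

def grad(theta, counts):
    """d log-likelihood / d theta[a][b], given observation counts of B."""
    g = [[0.0, 0.0], [0.0, 0.0]]
    for b in (0, 1):
        pb = sum(P_A[a] * theta[a][b] for a in (0, 1))   # marginal P(B=b)
        for a in (0, 1):
            # chain rule: d ln P(B=b) / d theta[a][b] = P(A=a) / P(B=b)
            g[a][b] += counts[b] * P_A[a] / pb
    return g

def train(counts, steps=200, lr=0.001):
    theta = [[0.5, 0.5], [0.5, 0.5]]  # start from a uniform CPT
    for _ in range(steps):
        g = grad(theta, counts)
        for a in (0, 1):
            # gradient step, then Euclidean projection back onto the
            # simplex: subtract the mean constraint violation per row
            row = [theta[a][b] + lr * g[a][b] for b in (0, 1)]
            shift = (sum(row) - 1.0) / 2.0
            theta[a] = [min(max(v - shift, 1e-6), 1 - 1e-6) for v in row]
    return theta

theta = train([20, 80])               # 20 cases with B=0, 80 with B=1
```

After training, the fitted mixture Σ_a P(A=a)·theta[a][1] approaches the empirical frequency 0.8. The projection step matters: simply renormalizing each row after an additive gradient step would bias the fixed point away from the maximum-likelihood solution, which is one reason principled schemes (such as the resistance and damping tools discussed in the paper) are needed to guide convergence.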