Adapting connectionist learning to Bayes networks. International Journal of Approximate Reasoning.
aHUGIN: a system creating adaptive causal probabilistic networks. UAI '92, Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence.
Introduction to Bayesian Networks.
Local learning in probabilistic networks with hidden variables. IJCAI '95, Proceedings of the 14th International Joint Conference on Artificial Intelligence, Volume 2.
Lazy propagation in junction trees. UAI '98, Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence.
A differential approach to inference in Bayesian networks. UAI '00, Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence.
Making sensitivity analysis computationally efficient. UAI '00, Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence.
When do numbers really matter? UAI '01, Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence.
As shown by Russell et al., 1995 [7], Bayesian networks can be equipped with a gradient-descent learning method similar to the training method for neural networks. The required gradients can be computed locally, along with propagation. We review how this is done, and we show how the gradient-descent approach can be applied to various tasks, such as tuning and training with sets of cases whose classifications may be definite or non-definite. We introduce tools for resistance and damping to guide the direction of convergence, and we use them in a new adaptation method that can also handle situations where parameters in the network covary.
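The core idea can be illustrated with a minimal sketch (not the authors' implementation): gradient ascent on the log-likelihood of observed cases with respect to a single conditional-probability-table entry in a hypothetical two-node network A → B. Here the parameter theta = P(B=1 | A=1) is reparameterized through a sigmoid so that gradient steps keep it a valid probability; the network, data, and learning rate are illustrative assumptions.

```python
import math

def tune(cases, steps=2000, lr=0.1):
    """Gradient-ascent tuning of theta = P(B=1 | A=1) from (a, b) cases.

    Reparameterize theta = sigmoid(w) so updates keep theta in (0, 1).
    """
    w = 0.0
    for _ in range(steps):
        theta = 1.0 / (1.0 + math.exp(-w))
        # d(log L)/d(theta), summed over the cases with A = 1:
        # each B=1 case contributes 1/theta, each B=0 case -1/(1-theta).
        g = sum((1.0 / theta if b == 1 else -1.0 / (1.0 - theta))
                for a, b in cases if a == 1)
        # Chain rule through the sigmoid: d(theta)/dw = theta * (1 - theta).
        w += lr * g * theta * (1.0 - theta)
    return 1.0 / (1.0 + math.exp(-w))

# 8 of 10 cases with A=1 have B=1, so the maximum-likelihood value is 0.8.
cases = [(1, 1)] * 8 + [(1, 0)] * 2
print(round(tune(cases), 2))  # → 0.8
```

In a full network the same gradient information can be gathered locally during junction-tree propagation rather than by direct enumeration as above; that is the computational point reviewed in the paper.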