Approximation Methods for Efficient Learning of Bayesian Networks
Proceedings of the 2008 conference on Approximation Methods for Efficient Learning of Bayesian Networks
We propose a Bayesian method for learning Bayesian network models using Markov chain Monte Carlo (MCMC). In contrast to most existing MCMC approaches, which define moves in terms of single edges, our approach decomposes a Bayesian network model into larger dependence components defined by Markov blankets. The idea is based on the observation that MCMC performs significantly better when the right decomposition is chosen, and that the edges in a vertex's Markov blanket form a natural dependence relationship. Using the ALARM and Insurance networks, we show that this decomposition allows MCMC to mix more rapidly and makes it less prone to getting stuck in local maxima than the single-edge approach.
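To make the contrast concrete, the following is a minimal sketch of the baseline single-edge MCMC structure sampler that the abstract compares against: a Metropolis sampler over DAGs whose proposals add, delete, or reverse one edge at a time. The scoring function, vertex count, and the toy target structure are hypothetical stand-ins (a real implementation would use a decomposable score such as BDeu over data), not the authors' method.

```python
import math
import random

def is_acyclic(edges, n):
    """Check that the directed graph on vertices 0..n-1 has no cycle (DFS)."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
    state = {v: 0 for v in range(n)}  # 0 = unvisited, 1 = on stack, 2 = done
    def dfs(u):
        state[u] = 1
        for w in adj[u]:
            if state[w] == 1 or (state[w] == 0 and not dfs(w)):
                return False  # back edge found: cycle
        state[u] = 2
        return True
    return all(state[v] != 0 or dfs(v) for v in range(n))

def mcmc_structure(score, n, iters=2000, seed=0):
    """Metropolis sampler over DAG structures using single-edge moves:
    each proposal adds, deletes, or reverses one edge."""
    rng = random.Random(seed)
    edges = set()
    current = score(edges)
    best, best_score = set(edges), current
    for _ in range(iters):
        u, v = rng.sample(range(n), 2)
        proposal = set(edges)
        if (u, v) in proposal:
            # delete the edge, or reverse it with probability 1/2
            proposal.discard((u, v))
            if rng.random() < 0.5:
                proposal.add((v, u))
        else:
            proposal.add((u, v))
        if not is_acyclic(proposal, n):
            continue  # reject proposals that introduce a cycle
        new = score(proposal)
        # accept with Metropolis probability min(1, exp(new - current))
        if new >= current or rng.random() < math.exp(new - current):
            edges, current = proposal, new
            if current > best_score:
                best, best_score = set(edges), current
    return best, best_score

# Hypothetical toy log-score: penalizes disagreement with a "true" structure.
TRUE = {(0, 1), (1, 2)}
def toy_score(edges):
    return -len(edges ^ TRUE)  # symmetric-difference penalty

best, s = mcmc_structure(toy_score, n=3)
```

The paper's blanket-based variant would replace the single-edge proposal with a block move that resamples all edges touching one vertex's Markov blanket at once, which is what lets the chain escape the local maxima this edge-at-a-time sampler can get stuck in.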