This paper presents and evaluates an approach to Bayesian model averaging in which the models are Bayesian networks (BNs). A comprehensive survey of the literature on structural priors for BNs is conducted. A number of prior distributions are defined using stochastic logic programs, and the Metropolis-Hastings MCMC algorithm is used to sample (approximately) from the posterior. We use proposals that are tightly coupled to the priors, giving rise to cheaply computable acceptance probabilities. To evaluate the method, experiments were conducted on data generated from known BNs. The experiments used six different BNs and varied the structural prior, the parameter prior, the Metropolis-Hastings proposal and the data size. Each experiment was repeated three times with different random seeds to test the robustness of the MCMC-produced results. Our results show that with effective priors (i) robust results are produced and (ii) informative priors improve results significantly.
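The core loop described above can be sketched as a Metropolis-Hastings sampler over DAG structures. The following toy Python example is a minimal illustration only, not the authors' implementation: the sparsity prior, the "likelihood" that rewards one particular edge, and the single-edge-toggle proposal (which is symmetric, so the acceptance ratio reduces to a cheap posterior ratio, in the spirit of the prior-coupled proposals in the paper) are all hypothetical stand-ins.

```python
import math
import random

N = 3  # number of nodes in the toy Bayesian network


def is_dag(edges):
    """Return True if the directed edge set is acyclic (depth-first search)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
    visiting, done = set(), set()

    def dfs(u):
        if u in visiting:
            return False  # back edge => cycle
        if u in done:
            return True
        visiting.add(u)
        for v in adj.get(u, []):
            if not dfs(v):
                return False
        visiting.remove(u)
        done.add(u)
        return True

    return all(dfs(u) for u in range(N))


def log_posterior(edges):
    """Hypothetical unnormalised log posterior over structures."""
    log_prior = -len(edges)                      # toy sparsity-favouring prior
    log_lik = 2.0 if (0, 1) in edges else 0.0    # toy data signal for edge 0->1
    return log_prior + log_lik


def mh_sample(n_iter, seed=0):
    """Metropolis-Hastings over DAGs: toggle one edge per step."""
    rng = random.Random(seed)
    pairs = [(i, j) for i in range(N) for j in range(N) if i != j]
    state = frozenset()  # start from the empty graph
    samples = []
    for _ in range(n_iter):
        i, j = rng.choice(pairs)
        proposal = set(state)
        # Toggle the chosen edge; the move is symmetric, so the
        # acceptance probability is just min(1, posterior ratio).
        if (i, j) in proposal:
            proposal.discard((i, j))
        else:
            proposal.add((i, j))
        proposal = frozenset(proposal)
        if is_dag(proposal):
            log_a = log_posterior(proposal) - log_posterior(state)
            if math.log(rng.random() + 1e-300) < log_a:
                state = proposal
        samples.append(state)
    return samples


if __name__ == "__main__":
    samples = mh_sample(20000, seed=1)
    post_burn = samples[2000:]
    frac = sum((0, 1) in s for s in post_burn) / len(post_burn)
    print(f"estimated posterior probability of edge 0->1: {frac:.2f}")
```

Because the toy likelihood rewards the edge 0->1 more than the sparsity prior penalises it, the chain visits structures containing that edge most of the time; swapping in a real structural prior and a BN marginal likelihood would turn the same loop into a structure sampler of the kind the paper evaluates.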