Causal Discovery from a Mixture of Experimental and Observational Data
UAI '99: Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence
This paper describes a Bayesian method for learning causal Bayesian networks, including networks that contain latent variables, from an arbitrary mixture of observational and experimental data. The paper presents Bayesian methods (including a new method) for learning the causal structure and parameters of the underlying causal process that is generating the data, given that the data contain a mixture of observational and experimental cases. These learning methods were applied to various mixtures of experimental and observational data generated from the ALARM causal Bayesian network, and the resulting structure predictions and parameter estimates were compared with the true causal structures and parameters given by the ALARM network. The paper shows that (1) the new method it introduces for learning Bayesian network structure from a mixture of data, the Gibbs Volume method, best estimates the probability of the data given the latent-variable model, and (2) on a large dataset (10,000 cases), another method, the implicit latent variable method, is asymptotically correct and efficient.
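To make the mixed-data setting concrete, below is a minimal Python sketch (not the authors' implementation) of a BDeu-style family score over a mixture of observational and experimental cases. It follows the intuition behind the paper's approach: when a case experimentally manipulates a variable, the manipulated value carries no information about that variable's conditional distribution, so the case is skipped when tallying counts for that variable, while still counting toward every other variable's family. The data layout, the `alpha` prior, and all names here are illustrative assumptions.

```python
from itertools import product
from math import lgamma


def family_score(data, manipulated, child, parents, arities, alpha=1.0):
    """BDeu-style log marginal likelihood for one child/parent family.

    data        : list of dicts mapping variable name -> integer-coded value
    manipulated : list of sets; manipulated[i] holds the variables that were
                  experimentally set in case i
    child       : name of the child variable
    parents     : tuple of parent variable names
    arities     : dict mapping variable name -> number of discrete states
    """
    parent_configs = list(product(*(range(arities[p]) for p in parents)))
    r = arities[child]
    # Dirichlet hyperparameter per (parent config, child state) cell.
    a = alpha / (len(parent_configs) * r)

    # Tally counts, skipping cases in which `child` itself was manipulated:
    # an experimentally set value says nothing about P(child | parents).
    counts = {pc: [0] * r for pc in parent_configs}
    for case, manip in zip(data, manipulated):
        if child in manip:
            continue
        pc = tuple(case[p] for p in parents)
        counts[pc][case[child]] += 1

    score = 0.0
    for pc in parent_configs:
        n_pc = sum(counts[pc])
        score += lgamma(r * a) - lgamma(r * a + n_pc)
        for n in counts[pc]:
            score += lgamma(a + n) - lgamma(a)
    return score
```

Summing this family score over all variables of a candidate structure gives its log marginal likelihood, which can then drive a search over structures in the usual Bayesian way; a purely observational dataset (all `manipulated` sets empty) reduces it to the standard BDeu score.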
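The quantity the latent-variable methods compete on is the marginal likelihood P(D | S) of the data under a structure with a hidden variable. The sketch below estimates it by plain prior sampling on a toy structure H -> X (binary hidden H, binary observed X) with uniform Beta(1, 1) priors; this is a generic baseline for illustration, not the paper's Gibbs Volume or implicit latent variable method, and the toy model and priors are assumptions.

```python
import random


def marginal_likelihood(xs, num_samples=5000, seed=0):
    """Estimate P(xs | H -> X) = E_theta[ prod_i sum_h P(h) P(x_i | h) ]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        # Draw parameters from uniform (Beta(1, 1)) priors.
        p_h = rng.random()                            # P(H = 1)
        p_x1_given_h = (rng.random(), rng.random())   # P(X = 1 | H = h)
        # Probability that X = 1 after marginalizing out the hidden H.
        p_x1 = (1 - p_h) * p_x1_given_h[0] + p_h * p_x1_given_h[1]
        lik = 1.0
        for x in xs:
            lik *= p_x1 if x == 1 else 1.0 - p_x1
        total += lik
    return total / num_samples
```

Naive prior sampling like this underflows and becomes hopelessly high-variance as the dataset grows, which is why more refined estimators, such as the Gibbs-sampling-based approaches the paper compares, are needed for datasets on the scale of the 10,000-case experiments reported here.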