Bayesian networks (BNs) are an appealing model for representing causal and noncausal dependencies among a set of variables. Learning BNs from observational data is challenging due to the nonidentifiability of the network structure and to model misspecification in the presence of unobserved (latent) variables. Here, we investigate the prospects of Bayesian learning of ancestor relations, including arcs, in the presence and absence of unobserved variables. We develop an exact dynamic programming algorithm to compute the respective posterior probabilities, under the assumption that the observed data are complete. Our experimental results show that ancestor relations between observed variables, arcs in particular, can be learned with good power even when a majority of the involved variables are unobserved. For comparison, deducing ancestor relations from a single maximum a posteriori network structure, or from its Markov equivalence class, appears somewhat inferior to Bayesian averaging. We also discuss some shortcomings of applying existing conditional-independence-test-based methods for learning ancestor relations.
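The Bayesian averaging idea can be illustrated with a minimal sketch: the posterior probability of an ancestor relation is the posterior-weighted fraction of DAGs in which a directed path exists. The paper's dynamic programming algorithm avoids explicit enumeration; the brute-force version below, over all 25 DAGs on three nodes with a caller-supplied (here uniform) score, is only a hypothetical illustration of the quantity being computed, not the authors' algorithm.

```python
from itertools import product

def is_acyclic(edges, n):
    """Check acyclicity with Kahn's algorithm; edges is a set of (i, j) pairs."""
    indeg = [0] * n
    for _, j in edges:
        indeg[j] += 1
    queue = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for (a, b) in edges:
            if a == v:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == n  # all nodes removable iff no directed cycle

def all_dags(n):
    """Enumerate every DAG on n labeled nodes as a set of directed edges."""
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    dags = []
    for mask in product([0, 1], repeat=len(pairs)):
        edges = {p for p, bit in zip(pairs, mask) if bit}
        if is_acyclic(edges, n):
            dags.append(edges)
    return dags

def is_ancestor(edges, a, b):
    """True if a directed path from a to b exists (depth-first search)."""
    stack, visited = [a], set()
    while stack:
        v = stack.pop()
        for (x, y) in edges:
            if x == v and y not in visited:
                if y == b:
                    return True
                visited.add(y)
                stack.append(y)
    return False

def posterior_ancestor(a, b, n, score=lambda g: 1.0):
    """P(a is an ancestor of b), averaging over all DAGs weighted by score(G).

    score would be the marginal likelihood times the structure prior in a
    real application; the default uniform score is a toy stand-in.
    """
    dags = all_dags(n)
    z = sum(score(g) for g in dags)
    return sum(score(g) for g in dags if is_ancestor(g, a, b)) / z
```

With a uniform score, `posterior_ancestor(0, 2, 3)` returns 9/25: of the 25 DAGs on three nodes, exactly nine contain a directed path from node 0 to node 2. Since full enumeration is exponential in the number of node pairs, this only scales to a handful of variables, which is precisely why an exact dynamic programming formulation is needed.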