Goodness-of-fit techniques
Probabilistic reasoning in intelligent systems: networks of plausible inference
Parallel distributed processing: explorations in the microstructure of cognition, vol. 1: foundations
Propagating imprecise probabilities in Bayesian networks
Artificial Intelligence
Adaptive Probabilistic Networks with Hidden Variables
Machine Learning - Special issue on learning with probabilistic representations
Bucket elimination: a unifying framework for probabilistic inference
Learning in graphical models
A tutorial on learning with Bayesian networks
Learning in graphical models
ACM Computing Surveys (CSUR)
Pattern Recognition and Neural Networks
Database System Concepts
A Differential Approach to Inference in Bayesian Networks
UAI '00 Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence
Making Sensitivity Analysis Computationally Efficient
UAI '00 Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence
Confidence Inference in Bayesian Networks
UAI '01 Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence
Bayesian Error-Bars for Belief Net Inference
UAI '01 Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence
Using query-specific variance estimates to combine Bayesian classifiers
ICML '06 Proceedings of the 23rd international conference on Machine learning
Discriminative model selection for belief net structures
AAAI'05 Proceedings of the 20th national conference on Artificial intelligence - Volume 2
Learning Bayesian nets that perform well
UAI'97 Proceedings of the Thirteenth conference on Uncertainty in artificial intelligence
Computational advantages of relevance reasoning in Bayesian belief networks
UAI'97 Proceedings of the Thirteenth conference on Uncertainty in artificial intelligence
Context-specific independence in Bayesian networks
UAI'96 Proceedings of the Twelfth international conference on Uncertainty in artificial intelligence
Query DAGs: a practical paradigm for implementing belief-network inference
UAI'96 Proceedings of the Twelfth international conference on Uncertainty in artificial intelligence
UAI'94 Proceedings of the Tenth international conference on Uncertainty in artificial intelligence
UAI'93 Proceedings of the Ninth international conference on Uncertainty in artificial intelligence
Sensitivity analysis in discrete Bayesian networks
IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans
Improved mean and variance approximations for belief net responses via network doubling
UAI '09 Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence
Review: Learning Bayesian networks: approaches and issues
The Knowledge Engineering Review
Good practice in Bayesian network modelling
Environmental Modelling & Software
Modelling relational statistics with Bayes Nets
Machine Learning
A Bayesian belief network models a joint distribution over variables using a DAG to represent variable dependencies and network parameters to represent the conditional probability of each variable given an assignment to its immediate parents. Existing algorithms assume each network parameter is fixed. From a Bayesian perspective, however, these network parameters can be random variables that reflect uncertainty in parameter estimates, arising because the parameters are learned from data or because they are elicited from uncertain experts. Belief networks are commonly used to compute responses to queries, i.e., to return a number for P(H=h | E=e). Parameter uncertainty induces uncertainty in query responses, which are thus themselves random variables. This paper investigates this query response distribution and shows how to accurately model it for any query and any network structure. In particular, we prove that the query response is asymptotically Gaussian and provide its mean value and asymptotic variance. Moreover, we present an algorithm for computing these quantities that has the same worst-case complexity as inference in general, and we also describe straight-line code for the case where the query includes all n variables. We provide empirical evidence that (1) our approximation of the variance is very accurate, and (2) a Beta distribution with these moments provides a very accurate model of the observed query response distribution. We also show how to use this to produce accurate error bars around these responses, i.e., to determine that the response to P(H=h | E=e) is x ± y with confidence 1-δ.
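The abstract's final step can be sketched in code: given the estimated mean and variance of a query response, moment-match a Beta distribution and report Gaussian error bars from the asymptotic normality result. This is a minimal illustration, not the paper's algorithm; the mean/variance inputs are hypothetical placeholders for values the paper's method would compute.

```python
import statistics

def beta_from_moments(mean, var):
    """Moment-match Beta(alpha, beta) to a query response's mean and variance.

    Valid when 0 < mean < 1 and var < mean * (1 - mean); otherwise no Beta
    distribution has these moments.
    """
    nu = mean * (1.0 - mean) / var - 1.0  # nu = alpha + beta
    return mean * nu, (1.0 - mean) * nu

def gaussian_error_bars(mean, var, delta=0.05):
    """Error bars x +/- y with confidence 1 - delta via the asymptotic Gaussian."""
    z = statistics.NormalDist().inv_cdf(1.0 - delta / 2.0)  # two-sided z-score
    half_width = z * var ** 0.5
    return mean - half_width, mean + half_width

# Hypothetical query response P(H=h | E=e) with mean 0.7 and variance 0.01.
alpha, beta = beta_from_moments(0.7, 0.01)  # -> (14.0, 6.0)
lo, hi = gaussian_error_bars(0.7, 0.01)     # 95% interval centered on 0.7
```

The Beta fit respects the [0, 1] range of a probability, which is why the paper finds it a better model of the observed response distribution than the Gaussian, even though both share the same first two moments.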