Artificial Intelligence
A major inference task in Bayesian networks is to explain why some variables are observed in their particular states, using a set of target variables. Existing methods for solving this problem often generate explanations that are either too simple (underspecified) or too complex (overspecified). In this paper, we introduce a method called Most Relevant Explanation (MRE), which finds a partial instantiation of the target variables that maximizes the generalized Bayes factor (GBF) as the best explanation for the given evidence. Our study shows that GBF has several theoretical properties that enable MRE to automatically identify the most relevant target variables when forming its explanation. In particular, the conditional Bayes factor (CBF), defined as the GBF of a new explanation conditioned on an existing explanation, provides a soft measure of how relevant the variables in the new explanation are to explaining the evidence given the existing explanation. As a result, MRE is able to automatically prune less relevant variables from its explanation. We also show that CBF captures well the explaining-away phenomenon that is often represented in Bayesian networks. Moreover, we define two dominance relations between candidate solutions and use them to generalize MRE to find a set of top explanations that is both diverse and representative. Case studies on several benchmark diagnostic Bayesian networks show that MRE is often able to find explanatory hypotheses that are not only precise but also concise.
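To make the GBF criterion concrete, the following sketch enumerates every partial instantiation of two target variables in a tiny hand-built explaining-away network (two causes, one observed effect) and returns the one maximizing GBF(x; e) = P(e | x) / P(e | x̄), where x̄ denotes all alternative instantiations of the same variables. The network structure and all CPT numbers are invented for illustration and are not taken from the paper:

```python
from itertools import product

# Hypothetical explaining-away network: causes B and E, observed effect A.
# All probabilities below are invented, chosen only for illustration.
P_B = 0.1                                  # P(B = 1)
P_E = 0.1                                  # P(E = 1)
P_A = {(0, 0): 0.01, (1, 0): 0.90,         # P(A = 1 | B, E)
       (0, 1): 0.30, (1, 1): 0.95}

# Full joint distribution over (B, E, A), obtained by multiplying the CPTs.
joint = {}
for b, e, a in product((0, 1), repeat=3):
    p = (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
    p *= P_A[(b, e)] if a else 1 - P_A[(b, e)]
    joint[(b, e, a)] = p

def prob(pred):
    """Sum the joint probability over the states satisfying pred."""
    return sum(p for state, p in joint.items() if pred(state))

def gbf(assign, ev_a=1):
    """GBF(x; e) = P(e | x) / P(e | x-bar) for a partial
    instantiation `assign` of the target variables {B, E}."""
    idx = {'B': 0, 'E': 1}
    match = lambda s: all(s[idx[k]] == v for k, v in assign.items())
    p_x = prob(match)
    p_e_and_x = prob(lambda s: match(s) and s[2] == ev_a)
    p_e = prob(lambda s: s[2] == ev_a)
    return (p_e_and_x / p_x) / ((p_e - p_e_and_x) / (1 - p_x))

# Candidate explanations: every non-empty partial instantiation of {B, E}.
candidates = [dict(zip(vs, vals))
              for vs in (('B',), ('E',), ('B', 'E'))
              for vals in product((0, 1), repeat=len(vs))]

best = max(candidates, key=gbf)   # MRE for the evidence A = 1
```

With these invented numbers the maximizer is the single-variable explanation {B: 1}: the less relevant cause E is pruned from the explanation, mirroring the pruning behavior the abstract attributes to GBF and CBF. In realistic networks the explanation space is exponential in the number of targets, so exhaustive enumeration like this is only feasible for toy examples.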