Characterizing diagnoses and systems
Artificial Intelligence
A comparison of decision analysis and expert rules for sequential diagnosis
UAI '88 Proceedings of the Fourth Annual Conference on Uncertainty in Artificial Intelligence
What is the most likely diagnosis?
UAI '90 Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence
Reversible Jump MCMC Simulated Annealing for Neural Networks
UAI '00 Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence
Qualitative propagation and scenario-based schemes for explaining probabilistic reasoning
UAI '90 Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence
UAI '04 Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence
MAP complexity results and approximation methods
UAI '02 Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence
Defining explanation in probabilistic systems
UAI '97 Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence
Abductive inference in Bayesian networks: finding a partition of the explanation space
ECSQARU'05 Proceedings of the 8th European conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty
Most Relevant Explanation: properties, algorithms, and evaluations
UAI '09 Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence
Most Relevant Explanation: computational complexity and approximation methods
Annals of Mathematics and Artificial Intelligence
Most relevant explanation in Bayesian networks
Journal of Artificial Intelligence Research
Many existing explanation methods in Bayesian networks, such as Maximum a Posteriori (MAP) assignment and Most Probable Explanation (MPE), generate complete assignments for the target variables. The set of target variables is often large a priori, yet only a few of them may be most relevant in explaining the given evidence, so generating explanations that assign values to all the target variables is not always desirable. This paper addresses the problem by proposing a new framework called Most Relevant Explanation (MRE), which aims to automatically identify the most relevant target variables. We also discuss in detail a specific instance of the framework that uses the generalized Bayes factor as its relevance measure. Finally, we propose an approximate algorithm based on Reversible Jump MCMC and simulated annealing to solve MRE. Empirical results show that the new approach typically finds much more concise explanations than existing methods.
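As context for the relevance measure named in the abstract: in the MRE literature, the generalized Bayes factor of a partial assignment x of the target variables given evidence e is standardly defined as the ratio below, and MRE seeks the partial assignment that maximizes it. The notation is assumed from that literature rather than copied from this abstract, with x-bar denoting the negation of x and X the set of partial instantiations of the target variables.

```latex
\mathrm{GBF}(x;\, e) \;=\; \frac{P(e \mid x)}{P(e \mid \bar{x})}
\qquad\text{and}\qquad
x^{\ast}_{\mathrm{MRE}} \;=\; \operatorname*{arg\,max}_{x \,\in\, \mathcal{X}} \mathrm{GBF}(x;\, e)
```

Intuitively, GBF rewards partial assignments under which the evidence is much more likely than under their negation, so a short explanation can outscore a complete one that includes irrelevant variables.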
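To make the search procedure concrete, here is a minimal, self-contained Python sketch of a trans-dimensional (RJMCMC-style) simulated-annealing search for MRE over a toy joint distribution. Everything in it (the toy distribution, the move set, the cooling schedule, and names like `gbf` and `mre_search`) is an illustrative assumption, not the authors' implementation, and the proposal-ratio corrections for the dimension-changing moves are omitted for brevity.

```python
# Hypothetical sketch: RJMCMC-style simulated annealing for Most Relevant
# Explanation over a toy joint distribution. Not the authors' implementation.
import itertools
import random

random.seed(0)
TARGETS = ["A", "B", "C"]  # binary target variables; E is a binary evidence variable

# Toy joint P(A, B, C, E): random positive values, normalized. In a real
# system these probabilities would come from Bayesian-network inference.
joint = {k: random.random() for k in itertools.product([0, 1], repeat=4)}
Z = sum(joint.values())
joint = {k: v / Z for k, v in joint.items()}

def prob(assign, e=None):
    """Marginal probability of a partial target assignment (and E = e if given)."""
    total = 0.0
    for a, b, c, ev in itertools.product([0, 1], repeat=4):
        full = {"A": a, "B": b, "C": c}
        if all(full[v] == val for v, val in assign.items()) and (e is None or ev == e):
            total += joint[(a, b, c, ev)]
    return total

def gbf(assign, e):
    """Generalized Bayes factor GBF(x; e) = P(e | x) / P(e | not-x)."""
    if not assign:
        return 0.0
    px, pxe, pe = prob(assign), prob(assign, e), prob({}, e)
    p_e_given_x = pxe / px
    p_e_given_notx = (pe - pxe) / (1.0 - px)
    return p_e_given_x / max(p_e_given_notx, 1e-12)

def propose(assign):
    """Birth/death/update moves over partial assignments (dimension can change)."""
    x = dict(assign)
    free = [v for v in TARGETS if v not in x]
    moves = (["birth"] if free else []) + (["death", "update"] if x else [])
    move = random.choice(moves)
    if move == "birth":            # instantiate a currently free variable
        x[random.choice(free)] = random.choice([0, 1])
    elif move == "death":          # drop an instantiated variable
        del x[random.choice(list(x))]
    else:                          # flip the value of an instantiated variable
        v = random.choice(list(x))
        x[v] = 1 - x[v]
    return x

def mre_search(e=1, iters=5000, t=2.0, cool=0.999, t_min=0.01):
    x = {random.choice(TARGETS): random.choice([0, 1])}
    best, best_score = dict(x), gbf(x, e)
    for _ in range(iters):
        y = propose(x)
        gx, gy = gbf(x, e), gbf(y, e)
        # Tempered Metropolis acceptance on the GBF ratio; cooling t toward
        # t_min gradually concentrates the chain on high-GBF explanations.
        if gy > 0 and (gx == 0 or random.random() < (gy / gx) ** (1.0 / t)):
            x, gx = y, gy
        if gx > best_score:
            best, best_score = dict(x), gx
        t = max(t_min, t * cool)
    return best, best_score

# Prints the best partial assignment found and its GBF score.
print(mre_search())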