A general framework for generating multivariate explanations in Bayesian networks

  • Authors:
  • Changhe Yuan; Tsai-Ching Lu

  • Affiliations:
  • Department of Computer Science and Engineering, Mississippi State University, Mississippi State, MS; HRL Laboratories, LLC, Malibu, CA

  • Venue:
  • AAAI'08: Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 2
  • Year:
  • 2008

Abstract

Many existing explanation methods in Bayesian networks, such as Maximum a Posteriori (MAP) assignment and Most Probable Explanation (MPE), generate complete assignments for the target variables. The set of target variables specified a priori is often large, but only a few of them may be most relevant in explaining the given evidence, so generating explanations over all the target variables is not always desirable. This paper addresses the problem by proposing a new framework called Most Relevant Explanation (MRE), which aims to automatically identify the most relevant target variables. We also discuss in detail a specific instance of the framework that uses the generalized Bayes factor as the relevance measure. Finally, we propose an approximate algorithm based on Reversible Jump MCMC and simulated annealing to solve MRE. Empirical results show that the new approach typically finds much more concise explanations than existing methods.
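
To make the relevance measure concrete, below is a minimal, self-contained Python sketch (not the authors' implementation) of how the generalized Bayes factor GBF(x; e) = P(e | x) / P(e | ~x) can be scored over partial assignments of the target variables and maximized by brute force on a toy network. The network structure, variable names, and probabilities are hypothetical, and the paper's Reversible Jump MCMC / simulated annealing search would take the place of the exhaustive loop on realistic networks.

```python
# Hedged sketch of the generalized Bayes factor (GBF) relevance measure
# that MRE maximizes over partial assignments of the target variables.
# The toy network P(A, B, E) = P(A) * P(B) * P(E | A, B), the variable
# names, and the probabilities below are all hypothetical.

from itertools import product

p_a = {0: 0.7, 1: 0.3}
p_b = {0: 0.6, 1: 0.4}
p_e_given_ab = {(0, 0): 0.05, (0, 1): 0.30, (1, 0): 0.40, (1, 1): 0.90}

def joint(a, b, e):
    """Joint probability P(A=a, B=b, E=e) of the toy network."""
    pe = p_e_given_ab[(a, b)]
    return p_a[a] * p_b[b] * (pe if e == 1 else 1 - pe)

def prob_evidence_given(partial):
    """P(E=1 | x) for a partial assignment x of the targets, e.g. {'A': 1}."""
    num = den = 0.0
    for a, b in product((0, 1), repeat=2):
        if all({'A': a, 'B': b}[v] == val for v, val in partial.items()):
            num += joint(a, b, 1)
            den += joint(a, b, 0) + joint(a, b, 1)
    return num / den

def gbf(partial):
    """Generalized Bayes factor GBF(x; e) = P(e | x) / P(e | ~x)."""
    p_e_x = prob_evidence_given(partial)
    # P(e | ~x): evidence probability over all target states excluding x.
    num = den = 0.0
    for a, b in product((0, 1), repeat=2):
        if not all({'A': a, 'B': b}[v] == val for v, val in partial.items()):
            num += joint(a, b, 1)
            den += joint(a, b, 0) + joint(a, b, 1)
    p_e_notx = num / den
    return p_e_x / p_e_notx

# Brute-force MRE: score every non-empty partial assignment of {A, B}
# and keep the one with the highest GBF given the evidence E=1.
candidates = []
for vars_ in ({'A'}, {'B'}, {'A', 'B'}):
    for vals in product((0, 1), repeat=len(vars_)):
        partial = dict(zip(sorted(vars_), vals))
        candidates.append((gbf(partial), partial))

best_score, best_partial = max(candidates, key=lambda t: t[0])
print(f"MRE = {best_partial} with GBF = {best_score:.2f}")
```

In this toy example the search often returns a partial assignment (a single target) rather than the full assignment, which illustrates the abstract's point that MRE tends to produce more concise explanations than MAP or MPE.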