Most Inforbable Explanations: Finding Explanations in Bayesian Networks That Are Both Probable and Informative

  • Authors: Johan Kwisthout
  • Affiliation: Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
  • Venue: ECSQARU'13: Proceedings of the 12th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty
  • Year: 2013

Abstract

The problems of generating candidate hypotheses and inferring the best hypothesis from this set are typically seen as two distinct aspects of the more general problem of non-demonstrative inference, or abduction. In the context of Bayesian networks the latter problem (computing most probable explanations) is well understood, while the former is typically left as an exercise to the modeler; in other words, the candidate hypotheses are pre-selected and hard-coded. In reality, however, non-demonstrative inference is an interactive process that switches between hypothesis generation, inference to the best explanation, evidence gathering, and deciding which information is relevant. In this paper we discuss a possible computational formalization of finding an explanation that is both probable and as informative as possible, thereby combining (at least some aspects of) both the hypothesis-generating and inference steps of the abduction process. The computational complexity of this formal problem, denoted Most Inforbable Explanation, is then established, and several problem parameters are investigated in order to gain a deeper understanding of what makes the problem intractable in general and under which circumstances it becomes tractable.
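The abstract's core tension, between explanations that are probable and explanations that are informative, can be made concrete with a small sketch. The Python example below is entirely an illustration: the toy network, its probabilities, and the ad-hoc trade-off score are our own assumptions, not the paper's formal definition of inforbability. It enumerates every partial hypothesis over two binary variables, computes each one's posterior given the evidence, and weighs that posterior against how specific the hypothesis is.

```python
import itertools

# Toy network: hypothesis variables H1, H2 and evidence variable E,
# with E depending on both (all variables binary). All numbers are
# made up for illustration.
P_H1 = {True: 0.3, False: 0.7}
P_H2 = {True: 0.4, False: 0.6}
P_E_TRUE = {  # P(E = True | H1, H2)
    (True, True): 0.95,
    (True, False): 0.70,
    (False, True): 0.60,
    (False, False): 0.05,
}

def joint(h1, h2, e):
    """Joint probability P(H1 = h1, H2 = h2, E = e) via the chain rule."""
    p_e = P_E_TRUE[(h1, h2)] if e else 1.0 - P_E_TRUE[(h1, h2)]
    return P_H1[h1] * P_H2[h2] * p_e

def posterior(partial, e=True):
    """P(partial hypothesis | E = e): marginalize over the hypothesis
    variables that the partial assignment leaves unspecified."""
    num = den = 0.0
    for h1, h2 in itertools.product((True, False), repeat=2):
        p = joint(h1, h2, e)
        den += p
        full = {"H1": h1, "H2": h2}
        if all(full[var] == val for var, val in partial.items()):
            num += p
    return num / den

# Hypothesis generation: enumerate every partial assignment to {H1, H2},
# from the empty hypothesis (maximally probable, uninformative) to the
# fully specified ones (maximally informative).
candidates = []
for subset in ((), ("H1",), ("H2",), ("H1", "H2")):
    for values in itertools.product((True, False), repeat=len(subset)):
        partial = dict(zip(subset, values))
        prob = posterior(partial)
        # Ad-hoc trade-off score: posterior probability weighted by the
        # fraction of hypothesis variables the explanation fixes.
        score = prob * (len(partial) / 2)
        candidates.append((score, prob, partial))

for score, prob, partial in sorted(candidates, key=lambda c: c[0], reverse=True):
    print(f"score={score:.3f}  posterior={prob:.3f}  {partial}")
```

In this toy run the empty hypothesis has posterior 1 but carries no information, the partial hypothesis H2 = True has the highest posterior among non-empty candidates, and under this particular weighting the fully specified explanation H1 = False, H2 = True scores best overall. Which candidate wins clearly depends on how the trade-off is weighted; formalizing that trade-off and analyzing its complexity is precisely what the paper undertakes.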