Explaining inferences in Bayesian networks

  • Authors:
  • Ghim-Eng Yap; Ah-Hwee Tan; Hwee-Hwa Pang

  • Affiliations:
  • School of Computer Engineering, Nanyang Technological University, Singapore, Singapore 639798; School of Computer Engineering, Nanyang Technological University, Singapore, Singapore 639798; School of Information Systems, Singapore Management University, Singapore, Singapore 178902

  • Venue:
  • Applied Intelligence
  • Year:
  • 2008


Abstract

While a Bayesian network (BN) can achieve accurate predictions even from erroneous or incomplete evidence, explaining its inferences remains a challenge. Existing approaches fall short because they do not exploit variable interactions and cannot account for compensations during inference. This paper proposes the Explaining BN Inferences (EBI) procedure for explaining how variables interact to reach conclusions. EBI explains the value of a target node in terms of the influential nodes in the target's Markov blanket under specific contexts, where the Markov nodes comprise the target's parents, children, and the children's other parents. Working back from the target node, EBI shows the derivation of each intermediate variable and finally explains how missing and erroneous evidence values are compensated. We validated EBI on a variety of problem domains, including mushroom classification, water purification, and web page recommendation. The experiments show that EBI generates high-quality, concise, and comprehensible explanations for BN inferences, in particular of the underlying compensation mechanism that enables BNs to outperform alternative prediction systems such as decision trees.
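The Markov blanket construction the abstract relies on is standard: a target's blanket consists of its parents, its children, and the children's other parents. A minimal sketch (not the authors' EBI code; the DAG representation and function name are illustrative assumptions) of identifying these nodes:

```python
# Minimal sketch, not the authors' EBI implementation: compute the
# Markov blanket of a target node in a Bayesian network represented
# as a DAG. Blanket = parents + children + the children's other parents.

def markov_blanket(dag, target):
    """dag: dict mapping each node to the list of its parent nodes."""
    parents = set(dag.get(target, []))
    # Children are nodes that list the target among their parents.
    children = {n for n, ps in dag.items() if target in ps}
    # Co-parents: the children's other parents, excluding the target itself.
    co_parents = {p for c in children for p in dag[c] if p != target}
    return parents | children | co_parents

# Toy network: A -> T, T -> C, B -> C
dag = {"A": [], "B": [], "T": ["A"], "C": ["T", "B"]}
print(sorted(markov_blanket(dag, "T")))  # -> ['A', 'B', 'C']
```

Given the target's Markov blanket values, the target is conditionally independent of all other variables, which is why EBI can restrict its explanation of each node to this local neighborhood.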