Relevant explanations: allowing disjunctive assignments

  • Authors: Solomon Eyal Shimony
  • Affiliations: Math. and Computer Science Department, Ben Gurion University of the Negev, Israel
  • Venue: UAI'93, Proceedings of the Ninth International Conference on Uncertainty in Artificial Intelligence
  • Year: 1993

Abstract

Relevance-based explanation is a scheme in which partial assignments to Bayesian belief network variables are explanations (abductive conclusions). We allow variables to remain unassigned in explanations as long as they are irrelevant to the explanation, where irrelevance is defined in terms of statistical independence. When multiple-valued variables exist in the system, especially when subsets of their values correspond to natural types of events, the overspecification problem, which independence-based explanation alleviates, resurfaces. To address this, as well as the question of explanation specificity, it is desirable to collapse such a subset of values into a single value on the fly. The equivalent method, adopted here, is to generalize the notion of assignment to allow disjunctive assignments. We proceed to define generalized independence-based explanations as maximum-posterior-probability independence-based generalized assignments (GIB-MAPs). GIB assignments are shown to have certain properties that ease the design of algorithms for computing GIB-MAPs. One such algorithm is discussed here, along with suggestions for how other algorithms may be adapted to compute GIB-MAPs. GIB-MAP explanations still suffer from instability, a problem that may be addressed by using "approximate" conditional independence as the condition for irrelevance.
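To illustrate the disjunctive-assignment idea, the following minimal Python sketch (not from the paper; the network, its parameters, and all identifiers are hypothetical) enumerates every disjunctive assignment to a single three-valued cause variable in a toy two-node network and scores each by its posterior probability given the evidence. The sketch deliberately omits the paper's independence-based admissibility condition for GIB assignments, which is what constrains which generalized assignments count as explanations.

```python
from itertools import chain, combinations

# Toy two-node network Cause -> Effect (hypothetical numbers, not from
# the paper). Cause is three-valued; the subset {storm, surge} is the
# kind of "natural type of event" the abstract suggests collapsing.
CAUSE_VALS = ("storm", "surge", "none")

P_CAUSE = {"storm": 0.2, "surge": 0.1, "none": 0.7}
P_EFFECT_GIVEN_CAUSE = {  # P(Effect = outage | Cause)
    "storm": 0.9, "surge": 0.8, "none": 0.05,
}

def joint_outage(cause):
    """P(Cause = cause, Effect = outage)."""
    return P_CAUSE[cause] * P_EFFECT_GIVEN_CAUSE[cause]

def posterior(disjunct):
    """P(Cause in disjunct | Effect = outage) for a disjunctive assignment."""
    evidence_prob = sum(joint_outage(c) for c in CAUSE_VALS)
    return sum(joint_outage(c) for c in disjunct) / evidence_prob

def generalized_assignments(values):
    """All nonempty value subsets, i.e. all disjunctive assignments."""
    return chain.from_iterable(
        combinations(values, r) for r in range(1, len(values) + 1)
    )

# Rank every disjunctive assignment by posterior probability. A real
# GIB-MAP search would also enforce the independence-based irrelevance
# condition; note that the full value set is equivalent to leaving the
# variable unassigned, which the scheme permits only when the variable
# is statistically irrelevant to the explanation.
for subset in sorted(generalized_assignments(CAUSE_VALS),
                     key=posterior, reverse=True):
    print(subset, round(posterior(subset), 3))
```

On these hypothetical numbers the disjunct ("storm", "surge") scores about 0.881 against 0.610 for "storm" alone, showing why collapsing a natural subset of values can yield a more probable explanation than any single value; the trivial full disjunction (probability 1) corresponds to leaving the variable unassigned, which relevance-based explanation already permits only for irrelevant variables.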