Relevance-based explanation is a scheme in which partial assignments to Bayesian belief network variables serve as explanations (abductive conclusions). Variables may remain unassigned in an explanation as long as they are irrelevant to it, where irrelevance is defined in terms of statistical independence. When multiple-valued variables exist in the system, especially when subsets of values correspond to natural types of events, the overspecification problem, alleviated by independence-based explanation, resurfaces. As a solution to that, and to address the question of explanation specificity, it is desirable to collapse such a subset of values into a single value on the fly. The equivalent method, adopted here, is to generalize the notion of assignments to allow disjunctive assignments. We then define generalized independence-based explanations as maximum-posterior-probability independence-based generalized assignments (GIB-MAPs). GIB assignments are shown to have certain properties that ease the design of algorithms for computing GIB-MAPs. One such algorithm is discussed here, along with suggestions for adapting other algorithms to compute GIB-MAPs. GIB-MAP explanations still suffer from instability, a problem that may be addressed by using "approximate" conditional independence as the condition for irrelevance.
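The core idea of a disjunctive (generalized) assignment can be illustrated on a toy two-node network. The sketch below is purely illustrative and not from the paper: the network (a disease node D with a symptom child S), its variable names, and all probability values are invented assumptions. It shows how collapsing a subset of values such as {flu, cold} into one disjunctive assignment yields an explanation with higher posterior probability than any single over-specified value, which is the motivation for GIB-MAPs.

```python
# Hypothetical toy Bayesian network D -> S (all numbers are made up
# for illustration; they do not come from the paper).
P_D = {"flu": 0.1, "cold": 0.2, "none": 0.7}           # prior on D
P_S_given_D = {                                        # CPT for S given D
    "flu":  {"fever": 0.80, "cough": 0.15, "none": 0.05},
    "cold": {"fever": 0.30, "cough": 0.60, "none": 0.10},
    "none": {"fever": 0.02, "cough": 0.08, "none": 0.90},
}

def joint(d, s):
    """Joint probability P(D = d, S = s) via the chain rule."""
    return P_D[d] * P_S_given_D[d][s]

def posterior(gen_assignment, evidence):
    """Posterior of a generalized assignment to D given S = evidence.

    gen_assignment is a *set* of values for D, i.e. a disjunctive
    assignment; a singleton set is an ordinary assignment.  Computed
    by brute-force enumeration, which is fine for this tiny example.
    """
    p_evidence = sum(joint(d, evidence) for d in P_D)
    p_joint = sum(joint(d, evidence) for d in gen_assignment)
    return p_joint / p_evidence

# Ordinary assignment vs. a disjunctive assignment, given S = fever:
p_flu = posterior({"flu"}, "fever")            # a single, specific value
p_sick = posterior({"flu", "cold"}, "fever")   # {flu, cold} collapsed on the fly
```

Here the disjunctive assignment D ∈ {flu, cold} scores about 0.91 versus about 0.52 for D = flu alone, so it would be preferred as an explanation if the distinction between flu and cold is irrelevant to the query. A real GIB-MAP algorithm would additionally check the independence-based irrelevance condition before collapsing the subset; this sketch only shows the posterior-scoring side.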