The problems of generating candidate hypotheses and inferring the best hypothesis from this set are typically seen as two distinct aspects of the more general problem of non-demonstrative inference, or abduction. In the context of Bayesian networks the latter problem (computing most probable explanations) is well understood, while the former is typically left as an exercise to the modeler. In other words, the candidate hypotheses are pre-selected and hard-coded. In reality, however, non-demonstrative inference is an interactive process that alternates between hypothesis generation, inference to the best explanation, evidence gathering, and deciding which information is relevant. In this paper we discuss a possible computational formalization of finding an explanation that is both probable and as informative as possible, thereby combining (at least some aspects of) both the hypothesis-generating and inference steps of the abduction process. We then establish the computational complexity of this formal problem, denoted Most Inforbable Explanation, and investigate several problem parameters in order to gain a deeper understanding of what makes the problem intractable in general, and under which circumstances it becomes tractable.
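As a rough illustration of the trade-off described above, the sketch below brute-forces all partial assignments to a set of candidate hypothesis variables in a toy three-variable network, scoring each by its posterior probability times a bonus for committing to more variables. The network, the candidate set, and the multiplicative scoring rule (with tuning parameter `alpha`) are illustrative assumptions of ours, not the paper's formal definition of Most Inforbable Explanation.

```python
import itertools

# Toy network over three binary variables: Flu and Bronchitis are independent
# causes, Fever is their noisy-OR child (leak 0.05). The full joint is coded
# directly so no Bayesian-network library is needed.
VARS = ("Flu", "Bronchitis", "Fever")

def joint(flu, bron, fever):
    p_causes = (0.1 if flu else 0.9) * (0.2 if bron else 0.8)
    p_fever = 1 - 0.95 * (1 - 0.8 * flu) * (1 - 0.4 * bron)  # noisy-OR
    return p_causes * (p_fever if fever else 1 - p_fever)

def posterior(partial, evidence):
    """P(partial | evidence), by brute-force summation over the joint."""
    def mass(constraints):
        total = 0.0
        for world in itertools.product([False, True], repeat=len(VARS)):
            w = dict(zip(VARS, world))
            if all(w[k] == v for k, v in constraints.items()):
                total += joint(w["Flu"], w["Bronchitis"], w["Fever"])
        return total
    return mass({**evidence, **partial}) / mass(evidence)

def most_inforbable(hyp_vars, evidence, alpha=2.0):
    """Enumerate every partial assignment to hyp_vars; score each by its
    posterior times alpha**(number of committed variables). The exponential
    informativeness bonus is an assumed stand-in for the paper's measure."""
    best, best_score = {}, -1.0
    for r in range(len(hyp_vars) + 1):
        for subset in itertools.combinations(hyp_vars, r):
            for values in itertools.product([False, True], repeat=r):
                h = dict(zip(subset, values))
                score = posterior(h, evidence) * alpha ** r
                if score > best_score:
                    best, best_score = h, score
    return best, best_score

if __name__ == "__main__":
    h, s = most_inforbable(["Flu", "Bronchitis"], {"Fever": True})
    print(f"best partial explanation: {h}  (score {s:.3f})")
```

With `alpha = 1` the empty, maximally uninformative explanation always wins (its posterior is 1); raising `alpha` pushes the search toward fuller assignments that are still reasonably probable, which is exactly the tension between probability and informativeness that the formal problem is meant to capture. The enumeration also makes the source of intractability visible: the search space grows exponentially in the number of candidate hypothesis variables.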