Typically, when one discusses approximation algorithms for NP-hard problems such as Traveling Salesperson, Vertex Cover, or Knapsack, one refers to algorithms that return a solution whose value is close to optimal: a tour with almost minimal length, a vertex cover only slightly larger than the minimum, or a collection of objects with close to maximal value. In contrast, one might also be interested in approximation algorithms that return solutions resembling the optimal solution, i.e., whose structure is akin to it: a tour almost identical to the optimal tour, a vertex cover that differs from the optimal cover in only a few vertices, or a collection of objects similar to the optimal one. In this paper, we discuss structure-approximation of the problem of finding the most probable explanation of observations in Bayesian networks, i.e., finding a joint value assignment that looks like the most probable explanation, rather than one whose probability is almost as high. We show that it is NP-hard to obtain even the value of a single variable of the most probable explanation. However, when partial orders on the values of the variables are available, we can improve on these results.
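The two approximation notions can come apart sharply. The following minimal sketch (a hypothetical three-variable chain network with made-up conditional probability tables, not an example from the paper) finds the most probable explanation by brute-force enumeration and then scores a candidate assignment in both senses: by its probability relative to the MPE (value-approximation) and by the fraction of variables on which it agrees with the MPE (structure-approximation):

```python
from itertools import product

# Toy binary Bayesian network A -> B -> C; all CPT numbers are illustrative.
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}

def joint(a, b, c):
    """Probability of the full assignment (A=a, B=b, C=c)."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# Brute-force MPE (no observations): the assignment of maximal joint probability.
assignments = list(product([0, 1], repeat=3))
mpe = max(assignments, key=lambda x: joint(*x))

def value_ratio(x):
    """Value-approximation quality: probability relative to the MPE."""
    return joint(*x) / joint(*mpe)

def structure_similarity(x):
    """Structure-approximation quality: fraction of variables agreeing with the MPE."""
    return sum(xi == mi for xi, mi in zip(x, mpe)) / len(mpe)
```

In this particular toy network the second-most-probable assignment retains over half of the MPE's probability yet disagrees with it on every variable, i.e., it is a good value-approximation and a worthless structure-approximation.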