Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference.
Relevance-Based Sequential Evidence Processing in Bayesian Networks. Proceedings of the Eleventh International Florida Artificial Intelligence Research Society Conference.
The Complexity of Theorem-Proving Procedures. STOC '71: Proceedings of the Third Annual ACM Symposium on Theory of Computing.
Feature Selection Algorithms: A Survey and Experimental Evaluation. ICDM '02: Proceedings of the 2002 IEEE International Conference on Data Mining.
UAI '04: Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence.
A General Framework for Generating Multivariate Explanations in Bayesian Networks. AAAI '08: Proceedings of the 23rd National Conference on Artificial Intelligence, Volume 2.
Complexity Results and Approximation Strategies for MAP Explanations. Journal of Artificial Intelligence Research.
The Computational Complexity of Probabilistic Planning. Journal of Artificial Intelligence Research.
Most Relevant Explanation: Properties, Algorithms, and Evaluations. UAI '09: Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence.
Reversible Jump MCMC Simulated Annealing for Neural Networks. UAI '00: Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence.
Approximating MAP Using Local Search. UAI '01: Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence.
Most Relevant Explanation in Bayesian Networks. Journal of Artificial Intelligence Research.
Learning Optimal Bayesian Networks: A Shortest Path Perspective. Journal of Artificial Intelligence Research.
Most Relevant Explanation (MRE) is the problem of finding a partial instantiation of a set of target variables that maximizes the generalized Bayes factor (GBF) as the explanation for given evidence in a Bayesian network. MRE has a huge solution space and is extremely difficult to solve in large Bayesian networks. In this paper, we first prove that MRE is at least NP-hard. We then define a subproblem of MRE, called MRE$_k$, that finds the most relevant k-ary explanation, and prove that the decision problem of MRE$_k$ is $NP^{\it PP}$-complete. Since MRE finds the best solution of MRE$_k$ over all k, and we can also show that MRE is in $NP^{\it PP}$, we conjecture that the decision problem of MRE is $NP^{\it PP}$-complete as well. Furthermore, we show that MRE remains in $NP^{\it PP}$ even if the number of target variables is restricted to within a log factor of the number of all unobserved variables. These complexity results prompted us to develop a suite of approximation algorithms for solving MRE. One algorithm finds an MRE solution by integrating reversible-jump MCMC and simulated annealing to simulate a non-homogeneous Markov chain that eventually concentrates its mass on the mode of the distribution of GBF scores over all solutions. The other algorithms are instances of local search, including forward search, backward search, and tabu search. We tested these algorithms on a set of benchmark diagnostic Bayesian networks. Our empirical results show that these methods efficiently found optimal MRE solutions for most of the test cases in our experiments.
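To make the objective concrete: the generalized Bayes factor of a partial instantiation x given evidence e is GBF(x; e) = P(e | x) / P(e | x̄), where x̄ denotes the complement event of x. The Python sketch below is a minimal toy illustration of one of the local search strategies mentioned above (greedy forward search over partial instantiations). It uses a small randomly generated full joint table in place of a real Bayesian network, so all variable names and numbers are hypothetical; it is a sketch of the idea, not the paper's implementation.

```python
import random
from itertools import product

# Toy joint distribution over three binary target variables (T0, T1, T2)
# and one binary evidence variable E. Hypothetical numbers for illustration.
random.seed(0)
states = list(product([0, 1], repeat=4))      # (t0, t1, t2, e)
weights = [random.random() for _ in states]
z = sum(weights)
joint = {s: w / z for s, w in zip(states, weights)}

TARGETS = [0, 1, 2]          # indices of the target variables in a state tuple
E_INDEX, E_VALUE = 3, 1      # observed evidence: E = 1

def prob(assign):
    """Marginal probability of a partial assignment {index: value}."""
    return sum(p for s, p in joint.items()
               if all(s[i] == v for i, v in assign.items()))

def gbf(x):
    """Generalized Bayes factor GBF(x; e) = P(e | x) / P(e | not-x)."""
    p_x = prob(x)
    p_xe = prob({**x, E_INDEX: E_VALUE})
    p_e = prob({E_INDEX: E_VALUE})
    p_e_given_x = p_xe / p_x
    p_e_given_notx = (p_e - p_xe) / (1.0 - p_x)
    return p_e_given_x / p_e_given_notx

def forward_search():
    """Greedy forward search: extend the partial instantiation one target
    variable at a time, keeping any extension that improves the GBF score."""
    best, best_score = {}, float("-inf")
    current, improved = {}, True
    while improved:
        improved = False
        for i in TARGETS:
            if i in current:
                continue
            for v in (0, 1):
                cand = {**current, i: v}
                score = gbf(cand)
                if score > best_score:
                    best, best_score = cand, score
                    improved = True
        current = dict(best)
    return best, best_score

sol, score = forward_search()
print(sol, score)
```

Forward search is a local method, so it may return a locally optimal partial instantiation; the abstract's tabu and backward variants, and the RJMCMC/simulated-annealing sampler, are alternative ways of escaping such local optima.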