Mini-Bucket Elimination (MBE) is a well-known approximation of Bucket Elimination (BE) that derives bounds on quantities of interest over graphical models. Both algorithms sequentially transform the original problem by eliminating variables one at a time. The elimination order is usually computed with the greedy min-fill heuristic. For BE, this heuristic has a clear rationale: it faithfully reflects the structure of the sequence of sub-problems that BE generates and orders the variables by a greedy criterion based on that structure. MBE, however, produces a sequence of sub-problems with a different structure. Using the min-fill heuristic with MBE therefore means making decisions based on the structure of the sub-problems that BE would produce, which is clearly meaningless. In this paper we propose a modification of the min-fill ordering heuristic that takes this difference into account. Our experiments on a number of benchmarks over two important tasks (computing the probability of evidence, and optimization) show that MBE with the new ordering is often far more accurate than with the standard one.
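For readers unfamiliar with the standard min-fill heuristic mentioned above, the following is a minimal sketch of it (not the modified heuristic proposed in the paper). It assumes the model's primal graph is given as an adjacency dictionary mapping each variable to the set of its neighbours; all names are illustrative.

```python
def fill_count(adj, v):
    """Number of fill edges that eliminating v would add: pairs of
    neighbours of v that are not already connected."""
    nbrs = list(adj[v])
    return sum(1 for i in range(len(nbrs)) for j in range(i + 1, len(nbrs))
               if nbrs[j] not in adj[nbrs[i]])

def min_fill_order(adj):
    """Greedy min-fill elimination ordering: repeatedly eliminate the
    variable whose removal adds the fewest fill edges, connecting its
    remaining neighbours into a clique before removing it."""
    adj = {v: set(n) for v, n in adj.items()}  # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: fill_count(adj, u))
        nbrs = adj[v]
        # Add the fill edges: connect all pairs of v's neighbours.
        for a in nbrs:
            for b in nbrs:
                if a != b:
                    adj[a].add(b)
        for a in nbrs:
            adj[a].discard(v)
        del adj[v]
        order.append(v)
    return order

# Example: a 4-cycle A-B-C-D; eliminating any vertex adds one fill edge.
g = {"A": {"B", "D"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"A", "C"}}
print(min_fill_order(g))  # → ['A', 'B', 'C', 'D']
```

Note that this greedy criterion scores each variable against the graph as BE would transform it; the paper's point is that MBE's mini-buckets induce a different sub-problem structure, so the score should be computed against that structure instead.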