Computation of the Most Probable Explanation (MPE) when probabilistic knowledge is expressed as a factored distribution is a classical AI reasoning problem: complete evidence is available about the values of the observed variables, and the problem consists in finding the most probable assignment of the remaining variables given this evidence. Optimising the choice of the variables to observe (the sample) so as to maximise the MPE probability is, however, a less classical and harder problem. In this article we tackle this question of optimal sampling in structured problems under a limited budget, within the framework of Hidden Markov Random Fields (HMRF). The value of a sample (the quantity we seek to optimise) is the expectation, over all possible sample outputs (observations), of the MPE probability. The contributions of this article are: (i) an original probabilistic model for optimal sampling in HMRF; (ii) computational complexity results about this problem, leading in particular to approximability and inapproximability results; and (iii) an exact solution algorithm and two approximate solution algorithms of decreasing time complexity, which we empirically evaluate on a problem of spatial sampling for occurrence map restoration.
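To make the sample-value objective concrete, here is a minimal brute-force sketch, not the paper's algorithm: a toy three-variable binary chain MRF (the pairwise potential strength `theta` and the chain structure are hypothetical, chosen only for illustration). The value of a sample S is computed directly from its definition, as the expectation over all possible observations on S of the resulting MPE probability.

```python
import itertools
import math

# Toy binary chain MRF on 3 variables with agreement-favouring pairwise
# potentials. theta is a hypothetical smoothing strength, not from the paper.
theta = 2.0
vals = [0, 1]

def weight(x):
    # Unnormalised Gibbs weight: exp(theta * number of agreeing neighbour pairs)
    return math.exp(theta * ((x[0] == x[1]) + (x[1] == x[2])))

configs = list(itertools.product(vals, repeat=3))
Z = sum(weight(x) for x in configs)
p = {x: weight(x) / Z for x in configs}   # exact joint distribution

def sample_value(S):
    """Expected MPE probability when the variables indexed by S are observed."""
    value = 0.0
    for obs in itertools.product(vals, repeat=len(S)):
        # Configurations consistent with this observation outcome
        match = [x for x in configs
                 if all(x[i] == o for i, o in zip(S, obs))]
        p_obs = sum(p[x] for x in match)          # marginal P(obs)
        mpe = max(p[x] for x in match) / p_obs    # max_x P(x | obs)
        value += p_obs * mpe                      # expectation over observations
    return value

# Under a budget of one observed variable, compare all candidate samples.
best = max([(0,), (1,), (2,)], key=sample_value)
```

Observing every variable trivially yields a sample value of 1 (the conditional MPE is then a single complete assignment), which is a useful sanity check; the combinatorial difficulty studied in the article comes from choosing the best budget-limited sample, since this enumeration is exponential in the number of variables.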