We present an analysis of previously proposed Monte Carlo algorithms for estimating the partition function of a Gibbs random field. We show that this problem reduces to estimating one or more expectations of suitable functionals of the Gibbs states with respect to properly chosen Gibbs distributions; as expected, the resulting estimators are consistent. Certain generalizations are also provided. We study computational complexity with respect to grid size and show that Monte Carlo partition function estimation algorithms fall into two categories: E-type algorithms, which are of exponential complexity, and P-type algorithms, which are of polynomial complexity, Turing reducible to the problem of sampling from the Gibbs distribution. E-type algorithms require estimating a single expectation, whereas P-type algorithms require estimating a number of expectations with respect to Gibbs distributions chosen to be sufficiently “close” to each other; in the latter case, the required number of expectations is of polynomial order in the grid size. We compare computational complexity using both theoretical results and simulation experiments, determine the most efficient E-type and P-type algorithms, and conclude that P-type algorithms are more appropriate for partition function estimation. Finally, we suggest a practical and efficient P-type algorithm for this task.
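To make the E-type/P-type distinction concrete, the following is a minimal toy sketch (not the paper's actual algorithms) on a tiny Ising model, small enough that the partition function can also be computed exactly by enumeration. The grid size, target inverse temperature, ladder length, and sample counts are illustrative choices of mine. The E-type-flavoured estimator writes Z as a single expectation under the uniform (β = 0) distribution; the P-type-flavoured estimator telescopes Z through a ladder of nearby Gibbs distributions, estimating each ratio Z_{k+1}/Z_k from samples drawn by a Gibbs sampler, which mirrors the abstract's reduction to the problem of sampling from the Gibbs distribution.

```python
import itertools
import math
import random

def bonds(L):
    """Nearest-neighbour bonds of an L x L grid with free boundaries."""
    out = []
    for i in range(L):
        for j in range(L):
            if j + 1 < L:
                out.append(((i, j), (i, j + 1)))
            if i + 1 < L:
                out.append(((i, j), (i + 1, j)))
    return out

def energy(spins, bond_list):
    """Ising energy E(x) = -sum over bonds of x_a * x_b, spins in {-1, +1}."""
    return -sum(spins[a] * spins[b] for a, b in bond_list)

def exact_log_z(L, beta):
    """Brute-force log partition function (feasible only for tiny grids)."""
    bl = bonds(L)
    sites = [(i, j) for i in range(L) for j in range(L)]
    z = sum(math.exp(-beta * energy(dict(zip(sites, conf)), bl))
            for conf in itertools.product((-1, 1), repeat=len(sites)))
    return math.log(z)

def gibbs_sweep(spins, L, beta, rng):
    """One heat-bath sweep: resample each site from its full conditional."""
    for i in range(L):
        for j in range(L):
            h = sum(spins[n] for n in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if n in spins)
            p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
            spins[(i, j)] = 1 if rng.random() < p_up else -1

def e_type_log_z(L, beta, n_samples=2000, seed=1):
    """E-type flavour: a single expectation, here taken w.r.t. the uniform
    (beta = 0) distribution via Z_beta = 2^N * E_uniform[exp(-beta * E(x))].
    The variance of this one expectation grows rapidly with grid size."""
    rng = random.Random(seed)
    bl = bonds(L)
    acc = 0.0
    for _ in range(n_samples):
        spins = {(i, j): rng.choice((-1, 1)) for i in range(L) for j in range(L)}
        acc += math.exp(-beta * energy(spins, bl))
    return L * L * math.log(2.0) + math.log(acc / n_samples)

def p_type_log_z(L, beta, n_steps=30, burn=50, n_samples=200, seed=1):
    """P-type flavour: a ladder of nearby Gibbs distributions; each ratio
    Z_{k+1}/Z_k = E_{beta_k}[exp(-(beta_{k+1} - beta_k) * E(x))] is estimated
    from Gibbs-sampler draws at beta_k, and the log-ratios are summed."""
    rng = random.Random(seed)
    bl = bonds(L)
    spins = {(i, j): rng.choice((-1, 1)) for i in range(L) for j in range(L)}
    log_z = L * L * math.log(2.0)  # Z at beta = 0 is exactly 2^N
    betas = [beta * k / n_steps for k in range(n_steps + 1)]
    for k in range(n_steps):
        b, db = betas[k], betas[k + 1] - betas[k]
        for _ in range(burn):                 # burn-in at the current rung
            gibbs_sweep(spins, L, b, rng)
        acc = 0.0
        for _ in range(n_samples):            # one sweep between retained samples
            gibbs_sweep(spins, L, b, rng)
            acc += math.exp(-db * energy(spins, bl))
        log_z += math.log(acc / n_samples)
    return log_z
```

On a 3 x 3 grid at β = 0.3 both estimators land close to `exact_log_z(3, 0.3)`; the point of the abstract's classification is what happens as the grid grows, where the single-expectation estimator's variance blows up exponentially while the ladder of "close" distributions keeps the work polynomial in grid size.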