We introduce a new class of upper bounds on the log partition function of a Markov random field (MRF). This quantity plays an important role in various contexts, including approximating marginal distributions, parameter estimation, combinatorial enumeration, statistical decision theory, and large-deviations bounds. Our derivation is based on concepts from convex duality and information geometry: in particular, it exploits mixtures of distributions in the exponential domain, and the Legendre mapping between exponential and mean parameters. In the special case of convex combinations of tree-structured distributions, we obtain a family of variational problems, similar to the Bethe variational problem, but distinguished by the following desirable properties: i) they are convex, and have a unique global optimum; and ii) the optimum gives an upper bound on the log partition function. This optimum is defined by stationary conditions very similar to those defining fixed points of the sum-product algorithm, or more generally, any local optimum of the Bethe variational problem. As with sum-product fixed points, the elements of the optimizing argument can be used as approximations to the marginals of the original model. The analysis extends naturally to convex combinations of hypertree-structured distributions, thereby establishing links to Kikuchi approximations and variants.
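The core bound described above follows from the convexity of the log partition function in the exponential parameters: if an MRF's parameter vector is written as a convex combination of tree-structured parameter vectors, Jensen's inequality gives an upper bound on log Z as the corresponding convex combination of tree log partition functions. The sketch below illustrates this on a toy 3-node binary cycle, using brute-force enumeration (not the paper's message-passing machinery); the specific parameter values and the uniform weights over spanning trees are illustrative choices, not taken from the paper.

```python
import itertools
import math

def log_Z(theta_node, theta_edge, edges, n):
    """Brute-force log partition function of a binary pairwise MRF
    with spins x_i in {-1, +1}, computed via log-sum-exp."""
    vals = []
    for x in itertools.product([-1, 1], repeat=n):
        e = sum(theta_node[i] * x[i] for i in range(n))
        e += sum(theta_edge[(i, j)] * x[i] * x[j] for (i, j) in edges)
        vals.append(e)
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

# Toy model: a single cycle on 3 nodes (illustrative parameters).
n = 3
edges = [(0, 1), (1, 2), (0, 2)]
theta_node = [0.1, -0.2, 0.3]
theta_edge = {e: 0.5 for e in edges}

exact = log_Z(theta_node, theta_edge, edges, n)

# The 3 spanning trees each drop one edge. With uniform tree weights
# rho_T = 1/3, every edge appears in 2 of the 3 trees, so its edge
# appearance probability is rho_e = 2/3 and its parameter must be
# rescaled by 1/rho_e on the trees that contain it, so that
# sum_T rho_T * theta(T) recovers the original theta exactly.
rho_T = 1.0 / 3.0
rho_e = 2.0 / 3.0
bound = 0.0
for dropped in edges:
    tree_edges = [e for e in edges if e != dropped]
    tree_theta = {e: theta_edge[e] / rho_e for e in tree_edges}
    bound += rho_T * log_Z(theta_node, tree_theta, tree_edges, n)

# By convexity of log Z, the reweighted combination upper-bounds the truth.
print(f"exact log Z = {exact:.4f}, tree-reweighted bound = {bound:.4f}")
assert bound >= exact
```

In the paper, each tree term is not enumerated but handled exactly by tree-structured inference, and the bound is tightened by optimizing over both the tree parameters (via the variational problem) and the edge appearance probabilities.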