The marginal maximum a posteriori probability (MAP) estimation problem, which computes the mode of the marginal posterior distribution of a subset of variables with the remaining variables marginalized out, is an important inference task in many models, such as those with hidden variables or uncertain parameters. Unfortunately, marginal MAP can be NP-hard even on trees, and it has attracted less attention in the literature than the joint MAP (maximization) and marginalization problems. We derive a general dual representation for marginal MAP that naturally integrates the marginalization and maximization operations into a joint variational optimization problem, making it straightforward to extend most variational algorithms to marginal MAP. In particular, we derive a set of "mixed-product" message passing algorithms for marginal MAP, whose updates are a hybrid of max-product, sum-product, and novel "argmax-product" messages. We also derive a class of convergent algorithms based on proximal point methods, including one that transforms the marginal MAP problem into a sequence of standard marginalization problems. Theoretically, we provide guarantees under which our algorithms give globally or locally optimal solutions, and we provide novel upper bounds on the optimal objectives. Empirically, we demonstrate that our algorithms significantly outperform existing approaches, including a state-of-the-art algorithm based on local search methods.
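The marginal MAP problem described above can be illustrated with a brute-force sketch on a tiny discrete model (the distribution and variable split below are illustrative toys, not from the paper): sum out the marginalized variables, then maximize over the rest. It also shows that the answer can differ from simply projecting the joint MAP configuration onto the max variables.

```python
import numpy as np

# Toy joint distribution p(x, y) over binary variables.
# x are the "max" variables, y the variables to be marginalized out.
# (Illustrative example only; sizes and values are arbitrary.)
rng = np.random.default_rng(0)
n_x, n_y = 2, 2
p = rng.random((2,) * (n_x + n_y))
p /= p.sum()  # normalize to a valid joint distribution

# Marginal MAP: argmax_x sum_y p(x, y)
marginal = p.reshape((2,) * n_x + (-1,)).sum(axis=-1)  # sum out all y
x_star = np.unravel_index(np.argmax(marginal), marginal.shape)

# Joint MAP projected onto x: argmax_{x,y} p(x, y), then drop y.
# In general this need not coincide with the marginal MAP solution.
joint_star = np.unravel_index(np.argmax(p), p.shape)[:n_x]
print(x_star, joint_star)
```

Exact enumeration like this is exponential in the number of marginalized variables, which is why the variational and message-passing approximations discussed in the abstract are needed for nontrivial models.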