Maximum a posteriori (MAP) estimation is an important task in many applications of probabilistic graphical models. Although finding an exact MAP solution is generally intractable, approximations based on linear programming (LP) relaxation often yield high-quality solutions in practice. In this paper we present an algorithm for solving the LP relaxation optimization problem. To overcome the lack of strict convexity, we apply an augmented Lagrangian method to the dual of the LP. The resulting algorithm, based on the alternating direction method of multipliers (ADMM), is guaranteed to converge to the global optimum of the LP relaxation objective. Our experiments show that it is competitive with other state-of-the-art algorithms for approximate MAP estimation.
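To illustrate the ADMM machinery the abstract refers to, here is a minimal, generic sketch (not the paper's MAP algorithm): the problem is split as f(x) + g(z) subject to x = z, and ADMM alternates a smooth minimization in x, a projection in z, and a scaled dual update. The concrete instance below, nonnegative least squares, and all names in it (`admm_nnls`, `rho`, `iters`) are illustrative choices, not from the paper.

```python
# Generic ADMM sketch on a toy problem (illustrative, not the paper's method):
# nonnegative least squares  min_x 0.5*||A x - b||^2  s.t.  x >= 0,
# split as f(x) = 0.5*||A x - b||^2 and g(z) = indicator of {z >= 0}, with x = z.
import numpy as np

def admm_nnls(A, b, rho=1.0, iters=300):
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable for the constraint x = z
    AtA = A.T @ A + rho * np.eye(n)  # system matrix for the x-update
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: minimize the smooth augmented-Lagrangian term in x
        x = np.linalg.solve(AtA, Atb + rho * (z - u))
        # z-update: Euclidean projection onto the nonnegative orthant
        z = np.maximum(0.0, x + u)
        # dual update: gradient ascent on the consensus constraint x = z
        u = u + x - z
    return z  # z is feasible (nonnegative) by construction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)
    x = admm_nnls(A, b)
    print("nonnegative:", bool(x.min() >= 0.0))
```

The same alternating pattern carries over to the LP-relaxation dual in the paper, where the subproblems decompose over factors of the graphical model; the quadratic penalty term is what compensates for the LP objective's lack of strict convexity.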