Many structured information extraction tasks employ collective graphical models that capture inter-instance associativity through various clique potentials. We propose tractable families of such potentials that are invariant under permutations of their arguments, and call them symmetric clique potentials. We present three families of symmetric potentials---MAX, SUM, and MAJORITY. We propose cluster message passing for collective inference with symmetric clique potentials, and present message computation algorithms tailored to such potentials. Our first message computation algorithm, called α-pass, is sub-quadratic in the clique size, outputs exact messages for MAX, and computes 13/15-approximate messages for Potts, a popular member of the SUM family. Empirically, it is up to two orders of magnitude faster than existing algorithms based on graph cuts or belief propagation. Our second algorithm, based on Lagrangian relaxation, operates on MAJORITY potentials and provides close-to-exact solutions while being two orders of magnitude faster. We show that the cluster message passing framework is more principled and accurate, and converges faster, than competing approaches. We extend our collective inference framework to exploit associativity of more general intra-domain properties of instance labelings, which opens up interesting applications in domain adaptation. Our approach leads to significant error reduction on unseen domains without incurring any overhead of model retraining.
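To make the α-pass idea concrete, here is a minimal sketch of its core loop for MAP labeling of a single clique with a symmetric potential. All names (`alpha_pass`, `scores`, `clique_potential`) are illustrative, not the authors' API, and this naive version recomputes the objective for every count k rather than using the incremental bookkeeping that gives the paper its sub-quadratic bound; it is meant only to show the "for each label α, sweep the count k" structure.

```python
def alpha_pass(scores, clique_potential):
    """Sketch of the alpha-pass idea for one clique (assumes >= 2 labels).

    scores: n x m table, scores[i][y] = vertex score of node i taking label y.
    clique_potential: function of the label-count histogram [n_1..n_m],
        assumed symmetric (invariant to permuting the clique's nodes).

    For each candidate label alpha and each count k, give alpha to the k
    nodes that lose the least by switching to alpha, give every other node
    its best non-alpha label, and keep the best labeling found overall.
    """
    n, m = len(scores), len(scores[0])
    best_val, best_lab = float("-inf"), None
    for alpha in range(m):
        # Best non-alpha label for each node, and the gain of taking alpha.
        alt = [max((y for y in range(m) if y != alpha),
                   key=lambda y: scores[i][y]) for i in range(n)]
        order = sorted(range(n),
                       key=lambda i: scores[i][alpha] - scores[i][alt[i]],
                       reverse=True)
        for k in range(n + 1):
            lab = [None] * n
            for pos, i in enumerate(order):
                lab[i] = alpha if pos < k else alt[i]
            counts = [lab.count(y) for y in range(m)]
            val = (sum(scores[i][lab[i]] for i in range(n))
                   + clique_potential(counts))
            if val > best_val:
                best_val, best_lab = val, lab
    return best_val, best_lab
```

With a MAX-family potential such as `lambda counts: max(counts)`, the sweep over k recovers the exact optimum, consistent with the exactness claim for MAX; for Potts-style SUM potentials the same sweep yields the approximate messages described above.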