The max-sum classifier predicts an n-tuple of labels from an n-tuple of observable variables by maximizing a sum of quality functions defined over neighbouring pairs of labels and observable variables. Predicting labels as MAP assignments of a Markov Random Field is a particular example of the max-sum classifier. Learning the parameters of the max-sum classifier is a challenging problem because even computing the response of such a classifier is NP-complete in general. Estimating the parameters by the Maximum Likelihood approach is feasible only for the subclass of max-sum classifiers with an acyclic structure of neighbouring pairs. Recently, discriminative methods represented by the perceptron and the Support Vector Machines, originally designed for binary linear classifiers, have been extended to learning some subclasses of the max-sum classifier. Besides max-sum classifiers with an acyclic neighbouring structure, it has been shown that discriminative learning is possible even with an arbitrary neighbouring structure, provided that the quality functions satisfy some additional constraints. In this article, we extend the discriminative approach to three other classes of max-sum classifiers with an arbitrary neighbourhood structure. We derive learning algorithms for two subclasses of max-sum classifiers whose response can be computed in polynomial time: (i) max-sum classifiers with supermodular quality functions and (ii) max-sum classifiers whose response can be computed exactly by a linear programming relaxation. Moreover, we show that the learning problem can be approximately solved even for a general max-sum classifier.
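To make the prediction rule concrete, the following sketch evaluates the max-sum classifier response by brute-force enumeration over all label n-tuples (exponential in n, so illustrative only; in general this maximization is NP-complete, as the abstract notes). The function names and interface (`q_unary`, `q_pair`, the edge list) are assumptions for illustration, not part of the paper:

```python
from itertools import product

def max_sum_predict(obs, labels, q_unary, q_pair, edges):
    """Brute-force max-sum classifier response: return the label n-tuple
    maximizing the sum of quality functions over vertices and over
    neighbouring pairs (edges). Exponential in len(obs); a sketch only."""
    n = len(obs)
    best_score, best_assign = float("-inf"), None
    for assign in product(labels, repeat=n):
        # sum of per-vertex qualities q(i, observation, label) ...
        score = sum(q_unary(i, obs[i], assign[i]) for i in range(n))
        # ... plus pairwise qualities over neighbouring label pairs
        score += sum(q_pair(i, j, assign[i], assign[j]) for (i, j) in edges)
        if score > best_score:
            best_score, best_assign = score, assign
    return best_assign, best_score

# Toy chain with binary labels: unary terms reward matching the
# observation, pairwise terms reward equal neighbouring labels
# (a Potts-style interaction, supermodular in the binary case).
obs = [0, 0, 1]
labels = (0, 1)
edges = [(0, 1), (1, 2)]
q_unary = lambda i, x, y: 1.0 if x == y else 0.0
q_pair = lambda i, j, a, b: 0.5 if a == b else 0.0
assign, score = max_sum_predict(obs, labels, q_unary, q_pair, edges)
# assign == (0, 0, 1), score == 3.5
```

For the acyclic (chain or tree) subclass mentioned above, the same maximization can be done exactly in polynomial time by dynamic programming instead of enumeration.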