We propose a simple and efficient approach to building undirected probabilistic classification models (Markov networks) that extend naïve Bayes classifiers and outperform existing directed probabilistic classifiers (Bayesian networks) of comparable complexity. The Markov network model is represented as a set of consistent probability distributions on subsets of variables, so inference for tasks such as class probability estimation can be carried out efficiently in closed form. We also propose a highly efficient Bayesian structure-learning algorithm for conditional prediction problems, based on integrating along a hill-climb through the structure space. A prior based on the model's degrees of freedom effectively prevents overfitting.
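The closed-form inference mentioned above can be illustrated in the naïve Bayes special case, where the model reduces to a set of consistent pairwise marginals P(X_i, C) together with the class marginal P(C), and the joint factors as P(c, x) = P(c) ∏_i P(x_i, c)/P(c). The function name and data layout below are illustrative assumptions, not the authors' code; this is a minimal sketch of class probability estimation from such consistent subset distributions.

```python
def class_posterior(pairwise, prior, x):
    """Closed-form class posterior for a naive-Bayes-style model given
    as consistent subset distributions.

    pairwise[i] maps (x_value, c) -> P(X_i = x_value, C = c)
    prior maps c -> P(C = c)
    Consistency requirement: sum_v pairwise[i][(v, c)] == prior[c]
    for every feature i and class c.
    """
    scores = {}
    for c, pc in prior.items():
        joint = pc
        for i, xi in enumerate(x):
            # P(x_i | c) recovered from the pairwise marginal
            joint *= pairwise[i][(xi, c)] / pc
        scores[c] = joint
    z = sum(scores.values())  # normalize over classes
    return {c: s / z for c, s in scores.items()}
```

A usage example with one binary feature: taking prior = {0: 0.5, 1: 0.5} and pairwise marginals P(X=1, C=0) = 0.1, P(X=1, C=1) = 0.4 yields P(C=1 | X=1) = 0.8, since P(x=1 | c=1) = 0.8 versus P(x=1 | c=0) = 0.2.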