Bayesian network models are widely used for discriminative prediction tasks such as classification. Usually, their parameters are determined by 'unsupervised' methods such as maximization of the joint likelihood, often because it is unclear how to find the parameters that maximize the conditional (supervised) likelihood. We show that the discriminative learning problem can be solved efficiently for a large class of Bayesian network models, including the Naive Bayes (NB) and tree-augmented Naive Bayes (TAN) models. We do this by showing that, under a certain general condition on the network structure, the discriminative learning problem is exactly equivalent to logistic regression with an unconstrained, convex parameter space. Hitherto this equivalence was known only for Naive Bayes models. Since logistic regression models have a concave log-likelihood surface, the global maximum can be found easily by local optimization methods.
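To make the equivalence concrete, here is a minimal sketch of the Naive Bayes case, which the abstract says was already known: under the NB structure with binary features, the log-odds log P(c=1|x) − log P(c=0|x) is affine in x, so maximizing the conditional (supervised) likelihood is exactly fitting a logistic regression, whose concave log-likelihood lets plain gradient ascent reach the global maximum. The data-generating weights and learning-rate/iteration settings below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Synthetic binary-feature data whose true class posterior has the
# logistic (i.e. Naive-Bayes-structured) form. Assumed setup for
# illustration only.
rng = np.random.default_rng(0)
n, d = 400, 5
X = rng.integers(0, 2, size=(n, d)).astype(float)
true_w = rng.normal(size=d)
p = 1.0 / (1.0 + np.exp(-(X @ true_w - 0.5)))
y = (rng.random(n) < p).astype(float)

# Discriminative learning = logistic regression: maximize the concave
# conditional log-likelihood by gradient ascent over unconstrained (w, b).
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(500):
    q = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # model's P(c=1 | x)
    grad_w = X.T @ (y - q) / n              # gradient of mean cond. log-lik.
    grad_b = np.mean(y - q)
    w += lr * grad_w
    b += lr * grad_b

acc = np.mean(((X @ w + b) > 0) == (y == 1))
print("training accuracy:", round(float(acc), 2))
```

Because the objective is concave in (w, b), this local ascent finds the globally optimal discriminative parameters; the paper's contribution is showing the same reduction holds for a larger class of structures, including TAN.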