On the Optimality of the Simple Bayesian Classifier under Zero-One Loss
Machine Learning - Special issue on learning with probabilistic representations
Applying general Bayesian techniques to improve TAN induction
KDD '99 Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining
An optimal minimum spanning tree algorithm
Journal of the ACM (JACM)
Bayes Optimal Instance-Based Learning
ECML '98 Proceedings of the 10th European Conference on Machine Learning
Tractable Bayesian Learning of Tree Belief Networks
UAI '00 Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence
On the classification performance of TAN and general Bayesian networks
Knowledge-Based Systems
Linking Bayesian networks and PLS path modeling for causal analysis
Expert Systems with Applications: An International Journal
ACIIDS'11 Proceedings of the Third international conference on Intelligent information and database systems - Volume Part II
Bayesian learning with mixtures of trees
ECML'06 Proceedings of the 17th European conference on Machine Learning
Robust Bayesian linear classifier ensembles
ECML'05 Proceedings of the 16th European conference on Machine Learning
Alleviating naive Bayes attribute independence assumption by attribute weighting
The Journal of Machine Learning Research
In this paper we present several Bayesian algorithms for learning Tree Augmented Naive Bayes (TAN) models. We extend the results of Meila & Jaakkola (2000a) to TANs by proving that, given a decomposable prior distribution over TANs, the exact Bayesian model averaging over TAN structures and parameters can be computed in polynomial time. Furthermore, we prove that the k maximum a posteriori (MAP) TAN structures can also be computed in polynomial time. We use these results to correct minor errors in Meila & Jaakkola (2000a) and to construct several TAN-based classifiers. We show that these classifiers provide consistently better predictions over UCI (Irvine) datasets and artificially generated data than TAN-based classifiers previously proposed in the literature.
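The Bayesian averaging and k-MAP results above build on the classical TAN structure-learning step: score each attribute pair by class-conditional mutual information and take a maximum spanning tree over the attributes. A minimal sketch of that base step (not the paper's Bayesian averaging itself; the helper names and the choice of Prim's algorithm are illustrative):

```python
import itertools
import math
from collections import Counter

def cond_mutual_info(data, i, j, c_idx=-1):
    """Empirical conditional mutual information I(X_i; X_j | C), in nats.
    `data` is a list of tuples; the class label sits at index c_idx."""
    n = len(data)
    p_c = Counter(row[c_idx] for row in data)          # counts of C
    p_ijc = Counter((row[i], row[j], row[c_idx]) for row in data)
    p_ic = Counter((row[i], row[c_idx]) for row in data)
    p_jc = Counter((row[j], row[c_idx]) for row in data)
    cmi = 0.0
    for (xi, xj, c), n_ijc in p_ijc.items():
        # p(x_i, x_j, c) * log [ p(x_i, x_j | c) / (p(x_i | c) p(x_j | c)) ]
        cmi += (n_ijc / n) * math.log(
            (n_ijc * p_c[c]) / (p_ic[(xi, c)] * p_jc[(xj, c)])
        )
    return cmi

def tan_structure(data, n_attrs):
    """Chow-Liu style step of TAN learning: maximum spanning tree over the
    attributes with edge weights I(X_i; X_j | C), grown with Prim's algorithm
    and rooted at attribute 0. Returns a parent map for attributes 1..n-1
    (the class node, parent of every attribute in a TAN, is left implicit)."""
    weights = {
        (i, j): cond_mutual_info(data, i, j)
        for i, j in itertools.combinations(range(n_attrs), 2)
    }
    w = lambda a, b: weights[(min(a, b), max(a, b))]
    in_tree = {0}
    parent = {}
    while len(in_tree) < n_attrs:
        # attach the out-of-tree attribute with the heaviest edge into the tree
        a, b = max(
            ((u, v) for u in in_tree for v in range(n_attrs) if v not in in_tree),
            key=lambda e: w(*e),
        )
        parent[b] = a
        in_tree.add(b)
    return parent
```

For example, on data where attribute 1 duplicates attribute 0 while attribute 2 varies independently, the tree links attribute 1 to attribute 0, since I(X_0; X_1 | C) dominates. The paper's contribution replaces this single-tree choice with exact averaging (and k-MAP enumeration) over all TAN structures under a decomposable prior.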