Over a decade ago, Friedman et al. introduced the Tree Augmented Naive Bayes (TAN) classifier, with experiments indicating that it significantly outperformed Naive Bayes (NB) in terms of classification accuracy, whereas general Bayesian network (GBN) classifiers performed no better than NB. This paper challenges those claims, using a careful experimental analysis to show that GBN classifiers significantly outperform NB on the datasets analyzed and perform comparably to TAN. It is found that the poor performance reported by Friedman et al. is not attributable to the GBN per se, but rather to their use of simple empirical frequencies to estimate GBN parameters; basic parameter smoothing (used in their TAN analyses but not their GBN analyses) improves GBN performance significantly. It is concluded that, while GBN classifiers may have some limitations, they deserve greater attention, particularly in domains where insight into classification decisions, as well as good accuracy, is required.
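To make the smoothing point concrete, the sketch below contrasts raw empirical frequencies with Laplace (add-one) smoothing when estimating a conditional probability table; the exact smoothing scheme used in the paper is an assumption here, and the function and data are illustrative only.

```python
import numpy as np

def estimate_cpt(counts, alpha=0.0):
    """Estimate P(X | parent config) from a table of observed counts.

    counts: array of shape (n_parent_configs, n_values).
    alpha:  pseudo-count added to every cell. alpha=0 gives the raw
            empirical frequencies; alpha=1 gives Laplace smoothing,
            one simple form of the "basic parameter smoothing" the
            abstract refers to (an assumption, not the paper's exact method).
    """
    smoothed = counts + alpha
    return smoothed / smoothed.sum(axis=1, keepdims=True)

# Toy example: under one parent configuration, value 0 was never observed.
counts = np.array([[0, 5],
                   [3, 1]], dtype=float)

raw = estimate_cpt(counts, alpha=0.0)      # assigns probability 0 to the unseen value
laplace = estimate_cpt(counts, alpha=1.0)  # every value keeps nonzero probability
```

With sparse data and many parent configurations, raw frequencies produce zero-probability entries that can dominate the product of conditional probabilities at classification time, which is consistent with the abstract's diagnosis of the GBN results.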