Naive Bayes (NB) is a simple Bayesian classifier that assumes the attributes are conditionally independent given the class; augmented NB (ANB) models extend NB by relaxing this independence assumption. The averaged one-dependence estimators (AODE) classifier averages one-dependence estimators (ODEs), which are ANB models. However, the expressiveness of AODE is still limited by the restricted structure of ODE. In this paper, we propose a model averaging method for NB trees (NBTs), which have more flexible structures, and present experimental results on classification accuracy. Comparative experiments show that the proposed method outperforms AODE in classification accuracy.
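For readers unfamiliar with the baseline, the conditional-independence assumption underlying NB can be illustrated with a minimal sketch. This is not the paper's NBT model-averaging method, only the standard NB rule: predict the class c maximizing P(c) · Πᵢ P(xᵢ | c), with counts and add-one smoothing as assumed simplifications.

```python
from collections import Counter, defaultdict

def train_nb(X, y):
    """Estimate class priors and per-attribute conditional counts."""
    priors = Counter(y)
    cond = defaultdict(Counter)  # (attribute index, class) -> value counts
    for xs, c in zip(X, y):
        for i, v in enumerate(xs):
            cond[(i, c)][v] += 1
    return priors, cond

def predict_nb(priors, cond, xs):
    """Score each class as P(c) * prod_i P(x_i | c), treating the
    attributes as independent given the class (the NB assumption)."""
    n = sum(priors.values())
    best, best_score = None, float("-inf")
    for c, cnt in priors.items():
        score = cnt / n
        for i, v in enumerate(xs):
            counts = cond[(i, c)]
            # add-one smoothing, reserving mass for one unseen value
            score *= (counts[v] + 1) / (sum(counts.values()) + len(counts) + 1)
        if score > best_score:
            best, best_score = c, score
    return best
```

ANB models such as ODEs relax exactly this structure by letting each attribute additionally depend on one parent attribute.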