Probabilistic reasoning in intelligent systems: networks of plausible inference
C4.5: programs for machine learning
Lazy learning
Machine Learning - Special issue on learning with probabilistic representations
Data mining: practical machine learning tools and techniques with Java implementations
Lazy Learning of Bayesian Rules
Machine Learning
SNNB: A Selective Neighborhood Based Naïve Bayes for Lazy Learning
PAKDD '02 Proceedings of the 6th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining
Not So Naive Bayes: Aggregating One-Dependence Estimators
Machine Learning
AAAI'96 Proceedings of the thirteenth national conference on Artificial intelligence - Volume 1
Induction of selective Bayesian classifiers
UAI'94 Proceedings of the Tenth international conference on Uncertainty in artificial intelligence
UAI'03 Proceedings of the Nineteenth conference on Uncertainty in Artificial Intelligence
Instance cloning local naive bayes
AI'05 Proceedings of the 18th Canadian Society conference on Advances in Artificial Intelligence
One Dependence Value Difference Metric
Knowledge-Based Systems
Boosting for superparent-one-dependence estimators
International Journal of Computing Science and Mathematics
Learning attribute weighted AODE for ROC area ranking
International Journal of Information and Communication Technology
Naive Bayes is a probability-based classification model built on the conditional independence assumption. In many real-world applications, however, this assumption is often violated. In response, researchers have devoted substantial effort to improving the accuracy of naive Bayes by weakening the conditional independence assumption. Among the most recent work is Averaged One-Dependence Estimators (AODE) [15], which demonstrates good classification performance. In this paper, we propose a novel lazy learning algorithm, Lazy Averaged One-Dependence Estimators (LAODE), by extending AODE. For a given test instance, LAODE first expands the training data by adding copies (clones) of each training instance according to its similarity to the test instance, and then uses the expanded training data to build an AODE classifier to classify the test instance. We evaluate our algorithm in the Weka system [16] on all 36 UCI data sets [11] recommended by Weka [17], and compare it with naive Bayes [3], AODE [15], and LBR [19]. The experimental results show that LAODE significantly outperforms all the compared algorithms.
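The lazy expand-then-train scheme described above can be sketched in a few lines. The sketch below is illustrative, not the paper's exact method: it assumes categorical attributes, measures similarity as the number of matching attribute values, adds one clone per matching value, and substitutes a Laplace-smoothed naive Bayes classifier for the AODE step.

```python
from collections import Counter, defaultdict

def similarity(x, y):
    # Number of attribute values the two instances share
    # (a simple choice of similarity; an assumption, not the paper's metric).
    return sum(a == b for a, b in zip(x, y))

def expand_training_data(train, test_x):
    # Lazy step: keep each training instance and add one clone per
    # attribute value it shares with the test instance.
    expanded = []
    for x, y in train:
        expanded.append((x, y))
        expanded.extend([(x, y)] * similarity(x, test_x))
    return expanded

def naive_bayes_predict(train, test_x, alpha=1.0):
    # Stand-in for the AODE step: Laplace-smoothed naive Bayes
    # trained on the (expanded) data.
    classes = Counter(y for _, y in train)
    counts = defaultdict(Counter)  # (attr_index, class) -> value counts
    for x, y in train:
        for i, v in enumerate(x):
            counts[(i, y)][v] += 1
    best, best_score = None, float("-inf")
    for c, nc in classes.items():
        score = nc / len(train)
        for i, v in enumerate(test_x):
            vocab = len({xi[i] for xi, _ in train})
            score *= (counts[(i, c)][v] + alpha) / (nc + alpha * vocab)
        if score > best_score:
            best, best_score = c, score
    return best

def lazy_predict(train, test_x):
    # Lazy classification: expand the training data for this test
    # instance, then train and predict on the expanded set.
    return naive_bayes_predict(expand_training_data(train, test_x), test_x)
```

Because the expansion depends on the test instance, a fresh classifier is built per query, which is the defining cost of lazy learning: no work at training time, more work at classification time.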