Lazy averaged one-dependence estimators

  • Authors:
  • Liangxiao Jiang; Harry Zhang

  • Affiliations:
  • Faculty of Computer Science, China University of Geosciences, Wuhan, Hubei, P.R. China; Faculty of Computer Science, University of New Brunswick, Fredericton, NB, Canada

  • Venue:
  • AI'06: Proceedings of the 19th International Conference on Advances in Artificial Intelligence (Canadian Society for Computational Studies of Intelligence)
  • Year:
  • 2006

Abstract

Naive Bayes is a probability-based classification model built on the conditional independence assumption. In many real-world applications, however, this assumption is often violated. In response, researchers have devoted substantial effort to improving the accuracy of naive Bayes by weakening the conditional independence assumption. The most recent such work is Averaged One-Dependence Estimators (AODE) [15], which demonstrates good classification performance. In this paper, we propose a novel lazy learning algorithm, Lazy Averaged One-Dependence Estimators (LAODE), that extends AODE. For a given test instance, LAODE first expands the training data by adding copies (clones) of each training instance in proportion to its similarity to the test instance, and then uses the expanded training data to build an AODE classifier for that test instance. We experimentally evaluate our algorithm in the Weka system [16] on all 36 UCI data sets [11] recommended by Weka [17], comparing it to naive Bayes [3], AODE [15], and LBR [19]. The experimental results show that LAODE significantly outperforms all the compared algorithms.
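
The procedure the abstract describes can be made concrete with a short sketch. The Python snippet below is a minimal, hypothetical rendering of the idea only, not the authors' implementation (which was done in Weka): the integer encoding of nominal attributes, the overlap-count similarity used as the clone weight, and the Laplace smoothing are all assumptions for illustration.

```python
import numpy as np

def similarity(x, y):
    """Overlap similarity for nominal attributes: the number of attribute
    values the two instances share (an assumed choice of measure)."""
    return sum(int(a == b) for a, b in zip(x, y))

def laode_predict(X_train, y_train, x_test, n_values, classes):
    """Classify x_test by weighting each training instance by its
    similarity to x_test (the 'cloning' step), then evaluating a
    weighted AODE over the resulting counts."""
    X = np.asarray(X_train)
    y = np.asarray(y_train)
    n, d = X.shape
    # Clone weights: a weight of k behaves like adding k copies of the
    # instance to the training data before building the classifier.
    w = np.array([similarity(row, x_test) for row in X], dtype=float)
    if w.sum() == 0.0:
        w[:] = 1.0  # no overlap anywhere: fall back to plain AODE
    N = w.sum()

    scores = {}
    for c in classes:
        in_c = (y == c)
        total = 0.0
        for i in range(d):  # attribute i serves as the "super-parent"
            in_ci = in_c & (X[:, i] == x_test[i])
            w_ci = w[in_ci].sum()
            # Laplace-smoothed joint probability P(c, x_i)
            p = (w_ci + 1.0) / (N + len(classes) * n_values[i])
            for j in range(d):
                if j == i:
                    continue
                w_cij = w[in_ci & (X[:, j] == x_test[j])].sum()
                # Laplace-smoothed conditional P(x_j | c, x_i)
                p *= (w_cij + 1.0) / (w_ci + n_values[j])
            total += p
        # Average the d one-dependence estimators. (Standard AODE also
        # skips super-parents whose value is too infrequent; omitted here.)
        scores[c] = total / d
    return max(scores, key=scores.get)

# Toy usage with three integer-encoded binary attributes.
X_train = [[0, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 1]]
y_train = [0, 0, 1, 1]
print(laode_predict(X_train, y_train, [0, 0, 1],
                    n_values=[2, 2, 2], classes=[0, 1]))  # -> 1
```

Because the clone weights depend on the test instance, all counting is deferred to classification time; this is what makes the method lazy, at the cost of recomputing the AODE statistics for every query.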