Instance Cloning Local Naive Bayes

  • Authors:
  • Liangxiao Jiang, Harry Zhang, Jiang Su

  • Affiliations:
  • Faculty of Computer Science, China University of Geosciences, Wuhan, China; Faculty of Computer Science, University of New Brunswick, Fredericton, NB, Canada; Faculty of Computer Science, University of New Brunswick, Fredericton, NB, Canada

  • Venue:
  • AI'05: Proceedings of the 18th Conference of the Canadian Society for Computational Studies of Intelligence on Advances in Artificial Intelligence
  • Year:
  • 2005

Abstract

The instance-based k-nearest neighbor algorithm (KNN) [1] is an effective classification model. Its classification is simply a vote within the neighborhood consisting of the k nearest neighbors of the test instance. Recently, researchers have been interested in deploying a more sophisticated local model, such as naive Bayes, within the neighborhood. The expectation is that there are no strong dependences within the neighborhood of the test instance, which alleviates the conditional independence assumption of naive Bayes. Generally, the smaller the size of the neighborhood (the value of k), the lower the chance of encountering strong dependences. When k is small, however, the training data for the local naive Bayes is small, and its classification would be inaccurate. In currently existing models, such as LWNB [3], a relatively large k is chosen; the consequence is that strong dependences seem unavoidable. In our opinion, a small k should be preferred in order to avoid strong dependences. We propose to deal with the lack of local training data using sampling (cloning). Given a test instance, clones of each instance in the neighborhood are generated according to its similarity to the test instance and added to the local training data. The local naive Bayes is then trained from the expanded training data. Since a relatively small k is chosen, the chance of encountering strong dependences within the neighborhood is small, and thus the classification of the resulting local naive Bayes should be more accurate. We experimentally compare our new algorithm with KNN and its improved variants in terms of classification accuracy, using the 36 UCI datasets recommended by Weka [8], and the experimental results show that our algorithm outperforms all those algorithms significantly and consistently at various k values.
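
The sketch below illustrates the cloning-based local naive Bayes idea described in the abstract: find a small neighborhood, clone each neighbor in proportion to its similarity to the test instance, and train a naive Bayes classifier on the expanded local data. It is a minimal illustration, not the authors' implementation; the similarity measure, the linear cloning scheme, the Laplace smoothing, and the default values of k and max_clones are all assumptions made for the example.

```python
# Minimal sketch of an instance-cloning local naive Bayes (assumed details; nominal attributes).
import numpy as np
from collections import Counter

def similarity(x, y):
    """Fraction of attributes on which two nominal instances agree (assumed measure)."""
    return float(np.mean(np.asarray(x) == np.asarray(y)))

def clone_count(sim, max_clones=10):
    """Map a similarity in [0, 1] to a number of clones (assumed linear scheme)."""
    return 1 + int(round(sim * (max_clones - 1)))

def iclnb_predict(X_train, y_train, x_test, k=5, max_clones=10):
    # 1. Find the k nearest neighbors of the test instance (highest similarity).
    sims = np.array([similarity(x, x_test) for x in X_train])
    nn_idx = np.argsort(-sims)[:k]

    # 2. Expand the local training data by cloning each neighbor
    #    in proportion to its similarity to the test instance.
    local_X, local_y = [], []
    for i in nn_idx:
        for _ in range(clone_count(sims[i], max_clones)):
            local_X.append(X_train[i])
            local_y.append(y_train[i])

    # 3. Train a naive Bayes classifier on the expanded local data
    #    (log-space, Laplace smoothing) and classify the test instance.
    classes = sorted(set(local_y))
    n_attrs = len(x_test)
    best_class, best_score = None, -np.inf
    for c in classes:
        members = [x for x, y in zip(local_X, local_y) if y == c]
        score = np.log((len(members) + 1) / (len(local_X) + len(classes)))  # class prior
        for a in range(n_attrs):
            counts = Counter(x[a] for x in members)
            values = {x[a] for x in local_X}
            score += np.log((counts[x_test[a]] + 1) / (len(members) + len(values)))
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```

For example, iclnb_predict(X_train, y_train, x_test, k=5) would classify one test instance using only its five nearest neighbors, but with similar neighbors weighted more heavily through cloning rather than by enlarging k, which is the trade-off the abstract argues for.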