Instance-Based Learning Algorithms. Machine Learning.
On the Optimality of the Simple Bayesian Classifier under Zero-One Loss. Machine Learning (special issue on learning with probabilistic representations).
Data mining: practical machine learning tools and techniques with Java implementations.
Lazy Learning of Bayesian Rules. Machine Learning.
SNNB: A Selective Neighborhood Based Naïve Bayes for Lazy Learning. PAKDD '02 Proceedings of the 6th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining.
Learning when training data are costly: the effect of class distribution on tree induction. Journal of Artificial Intelligence Research.
An analysis of Bayesian classifiers. AAAI'92 Proceedings of the Tenth National Conference on Artificial Intelligence.
UAI'03 Proceedings of the Nineteenth conference on Uncertainty in Artificial Intelligence
Learning Instance Greedily Cloning Naive Bayes for Ranking. ICDM '05 Proceedings of the Fifth IEEE International Conference on Data Mining.
Survey of Improving Naive Bayes for Classification. ADMA '07 Proceedings of the 3rd International Conference on Advanced Data Mining and Applications.
Learning decision tree for ranking. Knowledge and Information Systems.
Nearest neighbour group-based classification. Pattern Recognition.
Lazy averaged one-dependence estimators. AI'06 Proceedings of the 19th International Conference on Advances in Artificial Intelligence (Canadian Society for Computational Studies of Intelligence).
The instance-based k-nearest neighbor algorithm (KNN) [1] is an effective classification model. Its classification is based simply on a vote within the neighborhood consisting of the k nearest neighbors of the test instance. Recently, researchers have been interested in deploying a more sophisticated local model, such as naive Bayes, within the neighborhood. The expectation is that there are no strong dependences within the neighborhood of the test instance, which alleviates the conditional independence assumption of naive Bayes. Generally, the smaller the size of the neighborhood (the value of k), the smaller the chance of encountering strong dependences. When k is small, however, the training data for the local naive Bayes are scarce and its classification would be inaccurate. Existing models, such as LWNB [3], choose a relatively large k; the consequence is that strong dependences seem unavoidable. In our opinion, a small k should be preferred in order to avoid strong dependences. We propose to deal with the resulting lack of local training data by sampling (cloning). Given a test instance, clones of each instance in the neighborhood are generated according to its similarity to the test instance and added to the local training data. The local naive Bayes is then trained on the expanded training data. Since a relatively small k is chosen, the chance of encountering strong dependences within the neighborhood is small, and thus the classification of the resulting local naive Bayes is more accurate. We experimentally compare our new algorithm with KNN and its improved variants in terms of classification accuracy on the 36 UCI datasets recommended by Weka [8]; the experimental results show that our algorithm outperforms all of those algorithms significantly and consistently across various values of k.
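The following Python/NumPy code is a minimal sketch of the procedure described above, not the authors' implementation: it finds the k nearest neighbors of a test instance, clones each neighbor in proportion to its similarity to the test instance, trains a naive Bayes with Laplace smoothing on the expanded local data, and returns the most probable class. The similarity measure (fraction of matching nominal attributes), the cloning rule, and all names and parameters (classify_cloned_local_nb, max_clones, etc.) are illustrative assumptions.

import numpy as np

def similarity(x, y):
    # Assumed measure: fraction of nominal attributes on which two instances agree.
    return float(np.mean(x == y))

def classify_cloned_local_nb(X, y, x_test, k=10, max_clones=10):
    # Classify x_test with a naive Bayes trained on its cloned k-neighborhood.
    n_classes = int(y.max()) + 1
    n_attrs = X.shape[1]
    n_values = X.max(axis=0).astype(int) + 1  # number of values per nominal attribute

    # 1. Find the k nearest neighbors of the test instance.
    sims = np.array([similarity(xi, x_test) for xi in X])
    neighbors = np.argsort(-sims)[:k]

    # 2. Expand the local training set: each neighbor contributes clones
    #    in proportion to its similarity to the test instance (assumed rule).
    local_X, local_y = [], []
    for i in neighbors:
        n_copies = 1 + int(round(sims[i] * max_clones))
        local_X.extend([X[i]] * n_copies)
        local_y.extend([y[i]] * n_copies)
    local_X = np.array(local_X)
    local_y = np.array(local_y)

    # 3. Train a naive Bayes with Laplace smoothing on the expanded local data
    #    and return the most probable class for the test instance.
    log_post = np.zeros(n_classes)
    for c in range(n_classes):
        Xc = local_X[local_y == c]
        log_post[c] = np.log((len(Xc) + 1.0) / (len(local_X) + n_classes))
        for a in range(n_attrs):
            count = np.sum(Xc[:, a] == x_test[a])
            log_post[c] += np.log((count + 1.0) / (len(Xc) + n_values[a]))
    return int(np.argmax(log_post))

# Tiny usage example with made-up nominal data (attribute values encoded as integers).
if __name__ == "__main__":
    X = np.array([[0, 1, 2], [0, 1, 1], [1, 0, 2], [1, 1, 0], [1, 0, 0]])
    y = np.array([0, 0, 1, 1, 1])
    print(classify_cloned_local_nb(X, y, np.array([0, 1, 2]), k=3))

Because the local training set is rebuilt per test instance, this is a lazy method like KNN; the cloning step simply gives more weight to neighbors that closely resemble the test instance while keeping k small.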