Lazy learning
On the Optimality of the Simple Bayesian Classifier under Zero-One Loss
Machine Learning - Special issue on learning with probabilistic representations
Data mining: practical machine learning tools and techniques with Java implementations
Lazy Learning of Bayesian Rules
Machine Learning
The Case against Accuracy Estimation for Comparing Induction Algorithms
ICML '98 Proceedings of the Fifteenth International Conference on Machine Learning
SNNB: A Selective Neighborhood Based Naïve Bayes for Lazy Learning
PAKDD '02 Proceedings of the 6th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining
Tree Induction for Probability-Based Ranking
Machine Learning
UAI'03 Proceedings of the Nineteenth conference on Uncertainty in Artificial Intelligence
Instance cloning local naive bayes
AI'05 Proceedings of the 18th Canadian Society conference on Advances in Artificial Intelligence
Survey of Improving Naive Bayes for Classification
ADMA '07 Proceedings of the 3rd international conference on Advanced Data Mining and Applications
Naive Bayes (NB) [12] has been widely used in machine learning and data mining as a simple and effective classification algorithm. Because its conditional independence assumption rarely holds, researchers have made substantial efforts to improve naive Bayes. The related work can be broadly divided into two approaches, eager learning and lazy learning, depending on when the major computation occurs. Unlike the eager approach, the key idea of the lazy approach to extending naive Bayes is to learn a separate naive Bayes model for each test example. In recent years, several lazy extensions of naive Bayes have been proposed, such as SNNB [18], LWNB [7], and LBR [19], all aiming to improve the classification accuracy of naive Bayes. In many real-world machine learning and data mining applications, however, an accurate ranking is more desirable than an accurate classification. In response to this fact, we present a lazy learning algorithm called instance greedily cloning naive Bayes (IGCNB). Our motivation is to improve the ranking performance of naive Bayes as measured by AUC [4, 14]. We experimentally evaluated our algorithm on the 36 UCI datasets recommended by Weka [1] and compared it to C4.4 [16], NB [12], SNNB [18], and LWNB [7]. The experimental results show that our algorithm significantly outperforms all of the compared algorithms in yielding accurate rankings.
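The lazy idea described above — fitting a naive Bayes model tailored to each test example — can be illustrated with a minimal sketch. The code below is not the IGCNB algorithm itself (nor SNNB or LWNB); it is a simplified, hypothetical variant that selects the `k` training instances most similar to the test instance (Hamming similarity over categorical attributes) and fits a Laplace-smoothed naive Bayes on that neighborhood. The function name, `k`, and `alpha` are illustrative choices, not from the paper.

```python
import numpy as np

def lazy_naive_bayes_predict(X_train, y_train, x_test, k=10, alpha=1.0):
    """Illustrative lazy naive Bayes (NOT the paper's IGCNB).

    Selects the k training instances most similar to x_test,
    then fits a Laplace-smoothed categorical naive Bayes on
    that local neighborhood and returns class probabilities,
    which can be used for ranking (e.g., AUC) as well as
    classification. Attribute values are assumed to be coded
    as integers 0..v-1.
    """
    # Similarity = number of matching attribute values (Hamming).
    sims = (X_train == x_test).sum(axis=1)
    neigh = np.argsort(-sims)[:k]          # indices of k most similar
    Xn, yn = X_train[neigh], y_train[neigh]

    classes = np.unique(y_train)
    n_values = X_train.max(axis=0) + 1     # values per attribute

    log_probs = []
    for c in classes:
        Xc = Xn[yn == c]
        # Laplace-smoothed class prior on the neighborhood.
        lp = np.log((len(Xc) + alpha) / (len(Xn) + alpha * len(classes)))
        # Laplace-smoothed conditional probability per attribute.
        for j, v in enumerate(x_test):
            count = (Xc[:, j] == v).sum()
            lp += np.log((count + alpha) / (len(Xc) + alpha * n_values[j]))
        log_probs.append(lp)

    log_probs = np.array(log_probs)
    probs = np.exp(log_probs - log_probs.max())  # stable normalization
    return classes, probs / probs.sum()
```

Because the model is refit per test instance, the major computation is deferred to prediction time, which is the defining trade-off of the lazy approach: no global model to maintain, at the cost of slower classification.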