Learning Lazy Naive Bayesian Classifiers for Ranking

  • Authors:
  • Liangxiao Jiang; Yuanyuan Guo

  • Affiliations:
  • China University of Geosciences; China University of Geosciences

  • Venue:
  • ICTAI '05 Proceedings of the 17th IEEE International Conference on Tools with Artificial Intelligence
  • Year:
  • 2005

Abstract

Naive Bayes (NB) is well known as an effective and efficient classification algorithm. However, it relies on the conditional independence assumption, which is often violated in applications. Moreover, many real-world data mining applications require an accurate ranking of instances rather than an accurate classification. For example, in direct marketing, a ranking of customers by the likelihood that they will buy one's products is more useful than a buy/no-buy classification. In this paper, we first investigate the ranking performance of several lazy learning algorithms that extend naive Bayes, measured by AUC [9, 5]. We observe that they cannot significantly improve naive Bayes' ranking performance. Motivated by this fact, and aiming to improve naive Bayes' ranking accuracy, we present a new lazy learning algorithm, called lazy naive Bayes (LNB), that extends naive Bayes for ranking. We tested our algorithm experimentally on all 36 UCI data sets [4] recommended by Weka [1] and compared it to NB and C4.4 [11], measured by AUC. The experimental results show that our algorithm significantly outperforms both NB and C4.4.
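To make the abstract's evaluation setup concrete, the sketch below (not the paper's LNB algorithm; the data, helper names, and smoothing choices are illustrative assumptions) shows the two ingredients it describes: ranking instances by the naive Bayes posterior for the positive class, and scoring that ranking with AUC via the Mann-Whitney rank-sum statistic, which is the standard way AUC is computed for classifier rankings.

```python
# Illustrative sketch only -- not the paper's LNB algorithm.
# Rank instances by the naive Bayes posterior P(y=1 | x), then score
# the resulting ranking with AUC (Mann-Whitney rank-sum form).
from collections import Counter, defaultdict

def nb_scores(train_X, train_y, test_X, n_values):
    """Score each test instance by P(y=1 | x) under a categorical
    naive Bayes with Laplace smoothing. n_values[j] is the number of
    possible values of attribute j (assumed known)."""
    classes = sorted(set(train_y))
    n = len(train_y)
    count = Counter(train_y)
    # cond[c][j][v] = number of class-c training instances with x_j = v
    cond = {c: [defaultdict(int) for _ in train_X[0]] for c in classes}
    for x, c in zip(train_X, train_y):
        for j, v in enumerate(x):
            cond[c][j][v] += 1

    def joint(x, c):
        p = (count[c] + 1) / (n + len(classes))            # smoothed prior
        for j, v in enumerate(x):                          # independence assumption
            p *= (cond[c][j][v] + 1) / (count[c] + n_values[j])
        return p

    return [joint(x, 1) / sum(joint(x, c) for c in classes) for x in test_X]

def auc(scores, labels, pos=1):
    """AUC via the rank-sum statistic, averaging ranks over ties."""
    order = sorted(zip(scores, labels))
    n = len(order)
    rank_sum_pos, i = 0.0, 0
    while i < n:
        j = i
        while j < n and order[j][0] == order[i][0]:
            j += 1                                         # tie block [i, j)
        avg_rank = (i + 1 + j) / 2                         # mean of ranks i+1 .. j
        rank_sum_pos += avg_rank * sum(1 for k in range(i, j)
                                       if order[k][1] == pos)
        i = j
    n_pos = sum(1 for lab in labels if lab == pos)
    n_neg = n - n_pos
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Tiny demo with made-up data: two binary attributes, binary class.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 1, 1]
scores = nb_scores(X, y, X, n_values=[2, 2])
print(auc(scores, y))  # → 1.0 (a perfect ranking on this toy data)
```

An AUC of 0.5 corresponds to random ranking and 1.0 to a perfect one, which is why the paper compares LNB, NB, and C4.4 on this measure rather than on error rate.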