Scaling up the accuracy of Bayesian classifier based on frequent itemsets by m-estimate

  • Authors:
  • Jing Duan;Zhengkui Lin;Weiguo Yi;Mingyu Lu

  • Affiliations:
  • Information Science and Technology, Dalian Maritime University, Dalian, China (all authors)

  • Venue:
  • AICI'10 Proceedings of the 2010 international conference on Artificial intelligence and computational intelligence: Part I
  • Year:
  • 2010

Abstract

Frequent Itemsets Mining Classifier (FISC) is an improved Bayesian classifier that averages all classifiers built on frequent itemsets. In learning Bayesian network classifiers, estimating probabilities from a given set of training examples is crucial, and it has been shown that the m-estimate can improve the accuracy of many Bayesian classifiers. A natural question, then, is whether FISC combined with the m-estimate can perform even better. In response to this question, this paper aims to scale up the accuracy of FISC by means of the m-estimate and proposes new probability estimation formulas. The experimental results show that the Laplace estimate used in the original FISC does not perform well, while our m-estimate greatly improves accuracy, even outperforming the other strong Bayesian classifiers used for comparison.
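
The Laplace estimate and m-estimate contrasted in the abstract can be sketched as follows. This is a minimal illustration of the two standard estimators, not the FISC-specific formulas proposed in the paper; the counts, the prior p, and the weight m in the example are hypothetical. Note that with m equal to the number of classes and a uniform prior, the m-estimate reduces to the Laplace estimate; choosing m and p differently is what gives the m-estimate its extra flexibility.

```python
def laplace_estimate(n_c, n, k):
    """Laplace (add-one) estimate of P(c): one virtual example per class.

    n_c: number of training examples in class c
    n:   total number of training examples
    k:   number of classes
    """
    return (n_c + 1.0) / (n + k)


def m_estimate(n_c, n, p, m):
    """m-estimate of P(c): observed frequency smoothed toward a prior p
    with the weight of m virtual examples.

    p: prior probability assumed for class c
    m: equivalent sample size (smoothing strength)
    """
    return (n_c + m * p) / (n + m)


# Hypothetical counts: 3 of 10 training examples in class c, 4 classes.
print(laplace_estimate(3, 10, 4))       # (3 + 1) / (10 + 4) ~= 0.286

# With m = k = 4 and a uniform prior p = 1/4, the m-estimate coincides
# with the Laplace estimate.
print(m_estimate(3, 10, 0.25, 4.0))     # (3 + 1) / (10 + 4) ~= 0.286

# A non-uniform prior and a larger m pull the estimate toward that prior.
print(m_estimate(3, 10, 0.5, 10.0))     # (3 + 5) / (10 + 10) = 0.4
```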