Scaling Up the Accuracy of Bayesian Network Classifiers by M-Estimate

  • Authors:
  • Liangxiao Jiang; Dianhong Wang; Zhihua Cai

  • Affiliations:
  • Faculty of Computer Science, China University of Geosciences, Wuhan, Hubei, 430074, P.R. China; Faculty of Electronic Engineering, China University of Geosciences, Wuhan, Hubei, 430074, P.R. China; Faculty of Computer Science, China University of Geosciences, Wuhan, Hubei, 430074, P.R. China

  • Venue:
  • ICIC '07 Proceedings of the 3rd International Conference on Intelligent Computing: Advanced Intelligent Computing Theories and Applications. With Aspects of Artificial Intelligence
  • Year:
  • 2009

Abstract

In learning Bayesian network classifiers, estimating probabilities from a given set of training examples is crucial. In many cases, a probability can be estimated as the fraction of times an event is observed to occur over the total number of opportunities for it to occur. However, when training examples are scarce, this estimation method inevitably suffers from the zero-frequency problem. To avoid this practical problem, the Laplace estimate is usually used. The m-estimate is another well-known probability estimation method, so a natural question is whether a Bayesian network classifier with the m-estimate can perform even better. Responding to this question, we single out a special m-estimate method and empirically investigate its effect on various Bayesian network classifiers, such as Naive Bayes (NB), Tree Augmented Naive Bayes (TAN), Averaged One-Dependence Estimators (AODE), and Hidden Naive Bayes (HNB). Our experiments show that the classifiers with our m-estimate perform better than those with the Laplace estimate.
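To make the distinction between these estimators concrete, the following Python sketch contrasts the raw frequency estimate, the Laplace estimate, and a generic textbook m-estimate. The uniform prior p = 1/v and the chosen m values are illustrative assumptions only, not necessarily the special m-estimate the paper singles out.

    def frequency_estimate(n_xc, n_c):
        # Raw relative frequency P(x|c) = n_xc / n_c. Returns 0 when the
        # event never occurs in training: the zero-frequency problem.
        return n_xc / n_c if n_c > 0 else 0.0

    def laplace_estimate(n_xc, n_c, v):
        # Laplace (add-one) estimate, where v is the number of distinct
        # values the attribute can take.
        return (n_xc + 1) / (n_c + v)

    def m_estimate(n_xc, n_c, p, m):
        # Generic m-estimate: blends the observed frequency with a prior
        # p, weighted as if m "virtual" examples had been seen.
        # Choosing p = 1/v and m = v recovers the Laplace estimate.
        return (n_xc + m * p) / (n_c + m)

    # Example: an attribute value never observed among 10 training
    # examples of a class, with v = 3 possible values.
    print(frequency_estimate(0, 10))        # 0.0   (zero-frequency)
    print(laplace_estimate(0, 10, 3))       # ~0.077
    print(m_estimate(0, 10, p=1/3, m=1.0))  # ~0.030

Because the Laplace estimate is the special case p = 1/v and m = v, the two methods belong to the same smoothing family; the question the paper studies is whether other points in that family yield more accurate Bayesian network classifiers.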