Naive Bayes Classification Given Probability Estimation Trees

  • Authors:
  • Zengchang Qin

  • Affiliations:
  • University of California, Berkeley, USA

  • Venue:
  • ICMLA '06 Proceedings of the 5th International Conference on Machine Learning and Applications
  • Year:
  • 2006


Abstract

Tree induction is one of the most effective and widely used models in classification. Unfortunately, decision trees such as C4.5 [9] have been found to provide poor probability estimates. In empirical studies, Provost and Domingos [6] found that Probability Estimation Trees (PETs) give fairly good probability estimates. However, unlike normal decision trees, pruning reduces the performance of PETs. To obtain good probability estimates, we therefore usually need large trees, which are poor in terms of model transparency. In this paper, two hybrid models combining the Naive Bayes classifier and PETs are proposed in order to build a model with good performance without losing too much transparency. The first model uses Naive Bayes estimation given a PET, and the second model uses a group of small-sized PETs as Naive Bayes estimators. Empirical studies show that the first model outperforms the PET model at shallow depths and that the second model is equivalent in performance to Naive Bayes and PET.
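The second hybrid idea — per-attribute probability estimators whose outputs are combined Naive-Bayes style — can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's implementation: each "PET" is reduced to a single split on one attribute (a depth-1 stump whose leaves hold Laplace-smoothed class-conditional frequencies), and all function names are hypothetical.

```python
from collections import defaultdict

def train_stump_pets(X, y, n_attrs, alpha=1.0):
    """Train one depth-1 'PET' per attribute (illustrative simplification).

    Each stump's leaves store Laplace-smoothed class-conditional
    frequency estimates P(x_i | c), the kind of probability estimate
    a PET leaf would provide.
    """
    classes = sorted(set(y))
    pets = []
    for i in range(n_attrs):
        counts = defaultdict(lambda: defaultdict(float))
        for row, c in zip(X, y):
            counts[c][row[i]] += 1
        values = sorted({row[i] for row in X})
        pet = {}
        for c in classes:
            total = sum(counts[c].values())
            pet[c] = {v: (counts[c][v] + alpha) / (total + alpha * len(values))
                      for v in values}
        pets.append(pet)
    return classes, pets

def predict(x, classes, pets, priors):
    """Combine the stump-PET estimates with the Naive Bayes product rule."""
    scores = {}
    for c in classes:
        p = priors[c]
        for i, pet in enumerate(pets):
            p *= pet[c].get(x[i], 1e-9)  # tiny fallback for unseen values
        scores[c] = p
    return max(scores, key=scores.get)
```

A toy usage, with two binary attributes: train on six labeled examples and classify with `predict((1, 1), classes, pets, {'+': 0.5, '-': 0.5})`. Treating each attribute's small tree as an independent estimator is exactly the conditional-independence assumption of Naive Bayes; the appeal is that each component tree stays small enough to inspect.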