Hybrid Bayesian estimation tree learning with discrete and fuzzy labels

  • Authors:
  • Zengchang Qin; Tao Wan

  • Affiliations:
  • Intelligent Computing and Machine Learning Lab, School of Automation Science and Electrical Engineering, Beihang University, Beijing, China 100191; School of Biological Science and Medical Engineering, Beihang University, Beijing, China 100191, and Department of Biomedical Engineering, Case Western Reserve University, Cleveland, USA 44106

  • Venue:
  • Frontiers of Computer Science: Selected Publications from Chinese Universities
  • Year:
  • 2013

Abstract

The decision tree (DT) is one of the classical machine learning models, valued for its simplicity and effectiveness in applications. However, compared to the DT model, probability estimation trees (PETs) give better estimates of class probabilities. Obtaining good probability estimates usually requires large trees, which are undesirable with respect to model transparency. The linguistic decision tree (LDT) is a PET model based on label semantics: fuzzy labels are used for building the tree, and each branch is associated with a probability distribution over classes. If there is no overlap between neighboring fuzzy labels, the fuzzy labels become discrete labels, and an LDT with discrete labels is a special case of the PET model. In this paper, two hybrid models combining the naive Bayes classifier and PETs are proposed in order to build a model with good performance without losing too much transparency. The first model uses naive Bayes estimation given a PET, and the second uses a set of small-sized PETs as estimators by assuming independence between the trees. Empirical studies on discrete and fuzzy labels show that the first model outperforms the PET model at shallow depths, and the second model performs equivalently to naive Bayes and PETs.
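The second hybrid model described in the abstract — a set of small PETs treated as independent estimators — can be sketched as a naive-Bayes-style fusion of per-tree class posteriors. The sketch below is illustrative only (function names, toy posteriors, and the fusion rule P(c|x) ∝ P(c) · ∏ₜ [Pₜ(c|x)/P(c)] are assumptions, not the paper's exact formulation):

```python
# Hypothetical sketch: combine class posteriors from several small
# probability estimation trees (PETs) under an independence assumption,
# in naive-Bayes style. All names and numbers here are illustrative.

def combine_tree_posteriors(tree_posteriors, prior):
    """Fuse per-tree posteriors: P(c|x) proportional to
    P(c) * prod over trees of [P_t(c|x) / P(c)]."""
    scores = {}
    for c in prior:
        s = prior[c]
        for post in tree_posteriors:
            # independence assumption across the small trees
            s *= post[c] / prior[c]
        scores[c] = s
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}  # normalize to sum to 1

# Toy example: two shallow "trees", each returning a posterior over {0, 1}.
prior = {0: 0.5, 1: 0.5}
posteriors = [
    {0: 0.8, 1: 0.2},  # tree built on one feature subset
    {0: 0.6, 1: 0.4},  # tree built on another feature subset
]
fused = combine_tree_posteriors(posteriors, prior)
# Both trees favor class 0, so the fused posterior favors it more strongly.
```

Because each tree can stay shallow, the ensemble retains some of the transparency that a single large PET would sacrifice.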