Hybrid Bayesian estimation tree learning with discrete and fuzzy labels
Frontiers of Computer Science: Selected Publications from Chinese Universities
Tree induction is one of the most effective and widely used models in classification. Unfortunately, decision trees such as C4.5 [9] have been found to provide poor probability estimates. Through empirical studies, Provost and Domingos [6] found that Probability Estimation Trees (PETs) give fairly good probability estimates. However, unlike with normal decision trees, pruning reduces the performance of PETs, so obtaining good probability estimates usually requires large trees, which sacrifice model transparency. In this paper, two hybrid models combining the Naive Bayes classifier and PETs are proposed in order to build models with good performance without losing too much transparency. The first model uses Naive Bayes estimation given a PET, and the second uses a group of small-sized PETs as Naive Bayes estimators. Empirical studies show that the first model outperforms the PET model at shallow depths, and that the second model performs comparably to Naive Bayes and PETs.
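The first hybrid can be pictured as a shallow probability estimation tree whose leaves each delegate probability estimation to a local Naive Bayes model, so the tree stays small and transparent while the leaves refine the estimates. The sketch below is a minimal illustration of that idea, not the authors' implementation; it assumes scikit-learn, and the depth limit, the Gaussian Naive Bayes variant, and the dataset are illustrative choices only.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A shallow tree keeps the model transparent; its leaves partition the data.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)
leaf_ids = tree.apply(X_tr)

# Fit one Naive Bayes estimator per leaf; leaves that are single-class
# fall back to the tree's own (frequency-based) probability estimate.
nb_at_leaf = {}
for leaf in np.unique(leaf_ids):
    mask = leaf_ids == leaf
    if len(np.unique(y_tr[mask])) > 1:
        nb_at_leaf[leaf] = GaussianNB().fit(X_tr[mask], y_tr[mask])

def predict_proba(X):
    """Route each example to its leaf and use that leaf's NB estimator."""
    out = np.zeros((len(X), 2))
    for i, leaf in enumerate(tree.apply(X)):
        nb = nb_at_leaf.get(leaf)
        if nb is not None:
            # Map the leaf NB's class probabilities into the full class space.
            for cls, p in zip(nb.classes_, nb.predict_proba(X[i:i + 1])[0]):
                out[i, int(cls)] = p
        else:
            out[i] = tree.predict_proba(X[i:i + 1])[0]
    return out

print(predict_proba(X_te)[:3])

Compared with a deep, unpruned PET, this structure trades tree size for per-leaf density estimation, which is the transparency/performance trade-off the paper targets.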