Decision tree learning is one of the most widely used and practical methods for inductive inference. A fundamental issue in decision tree learning is the attribute selection measure. The information gain measure is the most popular choice for addressing this issue, but it has a notable disadvantage: it is biased towards selecting attributes with many values. Motivated by this fact, the gain ratio measure penalizes attributes with many values by incorporating a term called split information. Unfortunately, the gain ratio measure suffers from another practical issue: its denominator is sometimes zero or very small. In this paper, we single out an improved attribute selection measure called average gain, which penalizes attributes with many values by dividing the information gain by the number of attribute values. We experimentally tested its effectiveness using 36 UCI data sets.
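The three measures compared above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the standard definitions of entropy, information gain, and split information, and it assumes that "average gain" means the information gain divided by the number of distinct values of the attribute, as the abstract describes.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(values, labels):
    """Gain(A) = H(S) - sum over v of |S_v|/|S| * H(S_v)."""
    n = len(labels)
    conditional = 0.0
    for v in set(values):
        subset = [lab for x, lab in zip(values, labels) if x == v]
        conditional += len(subset) / n * entropy(subset)
    return entropy(labels) - conditional

def gain_ratio(values, labels):
    """Gain(A) / SplitInfo(A); fails when split information is ~0."""
    split_info = entropy(values)  # entropy of the value distribution
    return information_gain(values, labels) / split_info

def average_gain(values, labels):
    """Assumed form: Gain(A) divided by the number of distinct values."""
    return information_gain(values, labels) / len(set(values))
```

On a toy data set of four examples with labels `['y', 'y', 'n', 'n']`, a binary attribute `['a', 'a', 'b', 'b']` and an ID-like attribute `['1', '2', '3', '4']` both achieve an information gain of 1.0, so information gain cannot distinguish them; average gain scores them 0.5 and 0.25 respectively, penalizing the many-valued attribute. A constant attribute such as `['a', 'a', 'a', 'a']` makes `gain_ratio` divide by zero, illustrating the denominator issue mentioned above.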