One of the most challenging problems in data mining is developing scalable algorithms capable of mining massive data sets. This paper proposes a novel decision forest learning algorithm, named FDF, to represent multi-level semantic knowledge of the relationship between the data and the information implicit in it. FDF provides its users with a single set of rules by redefining the information gain of information theory; each tree in the decision forest is then constructed in a bottom-up learning framework, and the number of trees and the stopping criteria are set automatically. When no existing tree matches a test sample, FDF builds new logical rules for it, realizing a scalable construction process. Empirical studies on a set of natural domains show that the decision forest has clear advantages with respect to probabilistic performance.
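The abstract does not give FDF's algorithmic details, so the following is only an illustrative sketch of the two ingredients it names: a standard information-gain computation, and a forest that grows a new rule whenever no existing rule matches a test sample. The `RuleForest` class, its single-attribute rules, and the `information_gain` helper are assumptions for illustration, not the authors' method.

```python
import math
from collections import Counter

def information_gain(labels, splits):
    """Entropy of `labels` minus the weighted entropy of the `splits`
    (the classic information gain that FDF is said to redefine)."""
    def entropy(ys):
        n = len(ys)
        return -sum((c / n) * math.log2(c / n) for c in Counter(ys).values())
    n = len(labels)
    return entropy(labels) - sum(len(s) / n * entropy(s) for s in splits)

class RuleForest:
    """Hypothetical sketch: each 'tree' is reduced to a single rule
    (attribute index, value) -> label.  When no rule matches a sample,
    a new rule is added, mimicking the scalable construction process
    described in the abstract."""
    def __init__(self):
        self.rules = []  # list of (attribute index, value, label)

    def fit(self, X, y):
        # Only samples not covered by an existing rule add a new rule.
        for x, label in zip(X, y):
            if self.predict(x) is None:
                self.rules.append((0, x[0], label))

    def predict(self, x):
        # Return the label of the first matching rule, else None.
        for attr, value, label in self.rules:
            if x[attr] == value:
                return label
        return None
```

A perfectly separating split yields the full entropy as gain, and unseen attribute values return `None`, which is the point at which a real system would construct new logical rules.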