This paper introduces a concept and design of decision trees based on information granules: multivariable entities characterized by high homogeneity (low variability). Because these granules are developed via fuzzy clustering and play a pivotal role in the growth of the trees, the resulting structures are referred to as C-fuzzy decision trees. In contrast with "standard" decision trees, in which one variable (feature) is considered at a time, in this form of decision tree all variables are considered simultaneously at each node. This gives rise to a completely new geometry of the partition of the feature space, quite different from the guillotine cuts implemented by standard decision trees. The C-fuzzy decision tree is grown by expanding the node characterized by the highest variability of the information granule residing there. The paper shows how the tree is grown depending on additional node-expansion criteria, such as the cardinality (number of data points) at a given node and the level of structural dependencies (structurability) of the data there. A series of experiments using both synthetic and machine learning data sets is reported, and the results are compared with those produced by a "standard" version of the decision tree (namely, C4.5).
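The growth procedure outlined in the abstract — cluster the data at a node with fuzzy c-means, score each resulting granule by its variability, and expand the leaf whose granule is least homogeneous — can be sketched as follows. This is a simplified illustration under stated assumptions, not the authors' implementation: the fuzzy c-means routine, the membership-weighted variability measure, and all function and parameter names (`fcm`, `grow_c_fuzzy_tree`, `min_card`, etc.) are assumptions for the sketch.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns prototypes V (c x d) and memberships U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]          # prototype update
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = (d ** -p) / (d ** -p).sum(axis=1, keepdims=True)  # membership update
    return V, U

def variability(X, V, U, m=2.0):
    """Membership-weighted spread of the data around each prototype
    (low values = homogeneous granule)."""
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
    return ((U ** m) * d2).sum(axis=0)

def grow_c_fuzzy_tree(X, c=2, max_splits=3, min_card=4):
    """Repeatedly split the leaf with the highest variability, subject to a
    minimum-cardinality expansion criterion. Returns the final leaves."""
    leaves = [{"idx": np.arange(len(X)), "var": np.inf}]
    for _ in range(max_splits):
        candidates = [l for l in leaves if len(l["idx"]) >= min_card]
        if not candidates:
            break
        node = max(candidates, key=lambda l: l["var"])  # least homogeneous leaf
        leaves.remove(node)
        Xn = X[node["idx"]]
        V, U = fcm(Xn, c=c)
        var = variability(Xn, V, U)
        hard = U.argmax(axis=1)  # route each point to its dominant cluster
        for k in range(c):
            leaves.append({"idx": node["idx"][hard == k], "var": var[k]})
    return leaves
```

Note that, unlike a univariate split, each expansion here partitions the node's data around c multivariate prototypes, so every feature participates in every split — the geometry the abstract contrasts with guillotine cuts.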