Univariate decision trees consider the value of only one feature at each decision node, leading to axis-aligned splits. In a linear multivariate decision tree, each decision node divides the input space into two with a hyperplane. In a nonlinear multivariate tree, a multilayer perceptron at each node divides the input space arbitrarily, at the expense of increased complexity and a higher risk of overfitting. We propose omnivariate trees, where each decision node may be univariate, linear, or nonlinear depending on the outcome of comparative statistical tests on accuracy, thus automatically matching the complexity of the node to the subproblem defined by the data reaching that node. Such an architecture frees the designer from choosing the appropriate node type by performing model selection automatically at each node. Our simulation results indicate that such a decision tree induction method generalizes better than trees with the same type of node everywhere and induces smaller trees.
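The per-node model selection described above can be sketched as follows: given cross-validated accuracies for the three candidate node types, pick the simplest candidate whose accuracy is not significantly worse than the best one. This is a minimal illustration only; the function names (`select_node_model`, `paired_t_stat`), the fixed critical value, and the simplified paired t-test are assumptions for the sketch, not the exact statistical test used in the paper.

```python
import statistics

def paired_t_stat(accs_a, accs_b):
    """Paired t statistic on per-fold accuracy differences (a minus b)."""
    diffs = [a - b for a, b in zip(accs_a, accs_b)]
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    if sd == 0.0:
        # Identical folds: infinitely significant if a is better, else no difference.
        return float('inf') if mean > 0 else 0.0
    return mean / (sd / len(diffs) ** 0.5)

def select_node_model(fold_accs, t_crit=2.26):
    """Choose the node type for one decision node.

    fold_accs maps each candidate ('univariate', 'linear', 'nonlinear')
    to its list of per-fold validation accuracies.  Candidates are
    scanned simplest-first, and the first one that is not significantly
    worse than the best-scoring candidate is returned, so extra
    complexity is paid for only when the test justifies it.
    """
    order = ['univariate', 'linear', 'nonlinear']  # simplest first
    best = max(order, key=lambda m: statistics.mean(fold_accs[m]))
    for m in order:
        if m == best:
            return m
        # Is the best candidate significantly more accurate than m?
        if paired_t_stat(fold_accs[best], fold_accs[m]) < t_crit:
            return m  # not significantly worse: keep the simpler node
    return best
```

For example, if the linear and nonlinear candidates score nearly identically while the univariate stump lags far behind, the selection returns `'linear'`, mirroring the idea that node complexity should match the local subproblem.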