Top-down induction of decision trees is a simple and powerful method of pattern classification. In a decision tree, each node partitions the available patterns into two or more sets; new nodes are created to handle each of the resulting partitions, and the process continues recursively. A node is considered terminal if it satisfies a stopping criterion (for example, purity, i.e., all patterns at the node belong to a single class). A decision tree is univariate, linear multivariate, or nonlinear multivariate depending on whether a single attribute, a linear function of all the attributes, or a nonlinear function of all the attributes is used for the partitioning at each node. Though nonlinear multivariate decision trees are the most powerful, they are also the most susceptible to overfitting. In this paper, we propose to perform model selection at each decision node to build omnivariate decision trees. The model selection is done using a novel classifiability measure that captures the possible sources of misclassification with relative ease and accurately reflects the complexity of the subproblem at each node. The proposed approach is fast and does not incur the high computational burden of typical model selection algorithms. Empirical results over 26 data sets indicate that our approach is faster and achieves better classification accuracy than statistical model selection algorithms.
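To make the per-node model selection concrete, the following is a minimal Python sketch (assuming scikit-learn and NumPy) of growing an omnivariate tree for binary labels. The abstract does not give the formula for the classifiability measure, so node_complexity below is a hypothetical nearest-neighbour stand-in for it; the three candidate split models (axis-aligned stump, logistic regression, small MLP) and the easy/hard thresholds are likewise illustrative, not the authors' choices.

import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier      # univariate stump
from sklearn.linear_model import LogisticRegression  # linear multivariate
from sklearn.neural_network import MLPClassifier     # nonlinear multivariate

def node_complexity(X, y, k=5):
    # Fraction of each point's k nearest neighbours that carry a different
    # label, averaged over the node: near 0 for well-separated classes,
    # larger when classes interleave. A crude proxy, not the paper's measure.
    k = min(k, len(X) - 1)
    if k < 1:
        return 0.0
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    return float((y[idx[:, 1:]] != y[:, None]).mean())

def select_split_model(X, y, easy=0.1, hard=0.3):
    # Pick the simplest candidate the node's estimated complexity warrants.
    c = node_complexity(X, y)
    if c < easy:                       # near-separable: axis-aligned split
        return DecisionTreeClassifier(max_depth=1)
    if c < hard:                       # moderate overlap: linear split
        return LogisticRegression(max_iter=1000)
    return MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000)  # nonlinear

def grow(X, y, min_samples=10):
    # Recursively grow an omnivariate tree over NumPy arrays X and binary
    # labels y in {0, 1}; returns a nested dict of fitted split models.
    if len(np.unique(y)) == 1 or len(y) < min_samples:
        return {"leaf": int(np.bincount(y).argmax())}
    model = select_split_model(X, y).fit(X, y)
    go_left = model.predict(X) == 0
    if go_left.all() or not go_left.any():   # degenerate split: stop here
        return {"leaf": int(np.bincount(y).argmax())}
    return {"model": model,
            "left": grow(X[go_left], y[go_left], min_samples),
            "right": grow(X[~go_left], y[~go_left], min_samples)}

A typical call would be tree = grow(np.asarray(X), np.asarray(y)). The design point the sketch illustrates is the one motivating the paper: the complexity estimate is computed once per node from the data itself, so cheap univariate splits are used wherever the local subproblem is easy, and the expensive, overfitting-prone nonlinear model is fit only where the data demand it, avoiding the repeated candidate-fitting of statistical model selection.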