Information-Based Evaluation Criterion for Classifier's Performance. Machine Learning.
C4.5: programs for machine learning
Elements of machine learning. Machine Learning.
Wrappers for performance enhancement and oblivious decision graphs
Data mining: practical machine learning tools and techniques with Java implementations
Clustering Algorithms. Machine Learning.
Unbiased assessment of learning algorithms. IJCAI'97 Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence - Volume 2.
Noise-tolerant conceptual clustering. IJCAI'89 Proceedings of the 11th International Joint Conference on Artificial Intelligence - Volume 1.
In data analysis, induction of decision trees serves two main goals: first, induced decision trees can be used for classification/prediction of new instances, and second, they provide an easy-to-interpret model of the problem domain that can be used for explanation. The accuracy of the induced classifier is usually estimated with N-fold cross-validation, whereas for explanation purposes a decision tree induced from all the available data is used. Decision tree learning is relatively non-robust: a small change in the training set may significantly change the structure of the induced tree. This paper presents a decision tree construction method in which the domain model is built by consensus clustering of the N decision trees induced during N-fold cross-validation. Experimental results show that consensus decision trees are simpler than C4.5 decision trees, indicating that they may be a more stable approximation of the intended domain model than a decision tree constructed from the entire set of training instances.
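The N-fold cross-validation protocol mentioned above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's method: it uses a trivial majority-class learner as a stand-in for a decision tree inducer, and all function names (`n_fold_indices`, `cross_validated_accuracy`) are hypothetical. It shows how each of the N folds is held out once while a model is trained on the remaining N-1 folds, and how the held-out predictions are pooled into a single accuracy estimate.

```python
# Minimal sketch of N-fold cross-validation accuracy estimation.
# A majority-class predictor stands in for a decision tree learner;
# the paper's consensus-clustering step is not reproduced here.
from collections import Counter


def n_fold_indices(n_items, n_folds):
    """Partition item indices 0..n_items-1 into n_folds disjoint folds."""
    folds = [[] for _ in range(n_folds)]
    for i in range(n_items):
        folds[i % n_folds].append(i)
    return folds


def majority_class(labels):
    """'Training' step of the stand-in learner: most frequent label."""
    return Counter(labels).most_common(1)[0][0]


def cross_validated_accuracy(labels, n_folds=10):
    """Hold out each fold once, train on the rest, pool test accuracy."""
    folds = n_fold_indices(len(labels), n_folds)
    correct = 0
    for test_fold in folds:
        held_out = set(test_fold)
        train = [labels[i] for i in range(len(labels)) if i not in held_out]
        prediction = majority_class(train)
        correct += sum(1 for i in test_fold if labels[i] == prediction)
    return correct / len(labels)


labels = ["a"] * 7 + ["b"] * 3
print(cross_validated_accuracy(labels, n_folds=5))  # 0.7
```

In the paper's setting, each of the N training passes would instead induce a full decision tree; the N trees then serve both for the accuracy estimate and as input to the consensus-clustering step that produces the final model.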