Feature selection and classification model construction on type 2 diabetic patients' data. Artificial Intelligence in Medicine.
An optimization of ReliefF for classification in large datasets. Data & Knowledge Engineering.
It has been asserted that, with traditional pruning methods, decision trees grown on increasingly large amounts of training data keep increasing in size even when accuracy does not improve. With regard to error-based pruning, the experimental data used to support this assertion were apparently obtained with the default setting for pruning strength; in particular, the default certainty factor of 25% in the C4.5 decision tree implementation. We show that, in general, an appropriate setting of the certainty factor for error-based pruning causes decision tree size to plateau when accuracy is not increasing with more training data.
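To make the role of the certainty factor concrete, here is a minimal sketch, not the authors' code or C4.5's source, of the core decision in error-based pruning. It assumes the Clopper-Pearson upper confidence bound as a stand-in for C4.5's binomial upper-limit formula; the function names pessimistic_errors and should_prune and the leaf counts in the usage example are illustrative.

```python
# Minimal sketch of error-based pruning's certainty-factor mechanism.
# Assumption: the Clopper-Pearson upper confidence bound stands in for
# C4.5's binomial upper-limit formula U_CF(E, N); names are illustrative.
from scipy.stats import beta

def pessimistic_errors(n, e, cf=0.25):
    """Upper confidence limit on the error count at a node covering n
    training examples, e of which it misclassifies.

    cf mirrors C4.5's certainty factor (the -c option, default 25, i.e.
    0.25); a smaller cf yields a more pessimistic estimate, hence
    stronger pruning.
    """
    if n == 0:
        return 0.0
    if e >= n:
        return float(n)
    # Largest error rate still consistent, at confidence cf, with having
    # observed only e errors in n examples.
    upper_rate = beta.ppf(1.0 - cf, e + 1, n - e)
    return n * upper_rate

def should_prune(subtree_leaves, collapsed, cf=0.25):
    """Replace a subtree with a single leaf if the leaf's pessimistic
    error estimate does not exceed the sum over the subtree's leaves.

    subtree_leaves: list of (n, e) pairs, one per leaf of the subtree.
    collapsed: (n, e) for the node treated as one majority-class leaf.
    """
    subtree_est = sum(pessimistic_errors(n, e, cf) for n, e in subtree_leaves)
    collapsed_est = pessimistic_errors(*collapsed, cf=cf)
    return collapsed_est <= subtree_est

# Illustrative counts: a split that removes one training error.
# At the default cf the subtree is kept; at a stronger (smaller) cf the
# small five-example leaf is penalized enough that the split is pruned.
print(should_prune([(15, 0), (5, 1)], (20, 2), cf=0.25))  # False: keep
print(should_prune([(15, 0), (5, 1)], (20, 2), cf=0.01))  # True: prune
```

Lowering the certainty factor inflates the pessimistic estimates of small, fragmented leaves faster than that of a single collapsed node, so more subtrees are replaced by leaves; this is the pruning-strength setting whose tuning, per the abstract, lets tree size plateau once accuracy stops improving with more training data.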