Leave-One-Out Procedures for Nonparametric Error Estimates. IEEE Transactions on Pattern Analysis and Machine Intelligence.
C4.5: Programs for Machine Learning.
Wrappers for Performance Enhancement and Oblivious Decision Graphs.
Inference for the Generalization Error. Machine Learning.
No Unbiased Estimator of the Variance of K-Fold Cross-Validation. The Journal of Machine Learning Research.
Statistical Comparisons of Classifiers over Multiple Data Sets. The Journal of Machine Learning Research.
Sensitivity Analysis of k-Fold Cross Validation in Prediction Error Estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Probabilistic Graphical Models: Principles and Techniques. Adaptive Computation and Machine Learning series.
Distribution-Free Performance Bounds with the Resubstitution Error Estimate (Corresp.). IEEE Transactions on Information Theory.
Estimating the prediction error of classifiers induced by supervised learning algorithms is important not only for anticipating a classifier's future performance, but also for choosing a classifier from a given set (model selection). If the goal is to estimate the prediction error of a particular classifier, the desired estimator should have low bias and low variance. If the goal is model selection, however, the chosen estimator should have low variance, provided the bias term is independent of the classifier under consideration, so that comparisons remain fair. This paper follows the analysis proposed in [1] of the statistical properties of k-fold cross-validation estimators and extends it to the most popular error estimators: resubstitution, holdout, repeated holdout, simple bootstrap, and 0.632 bootstrap, each with and without stratification. We present a general framework for decomposing the variance of these error estimators according to the nature of the variance (irreducible versus reducible) and the source of sensitivity (internal versus external). An extensive empirical study was performed for the estimators above with naive Bayes and C4.5 classifiers over training sets drawn from assorted probability distributions. The empirical analysis decomposes the variances following the proposed framework and checks the assumption that the bias is independent of the classifier. Based on the obtained results, we recommend the most appropriate error estimators for model selection under different experimental conditions.
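For context, the analysis in [1] that this work extends expresses the variance of the k-fold cross-validation estimate, computed from n held-out errors grouped into K folds of size m = n/K, as

\[
\operatorname{Var}(\hat{\mu}_{k\text{-cv}}) = \tfrac{1}{n}\,\sigma^{2} + \tfrac{m-1}{n}\,\omega + \tfrac{n-m}{n}\,\gamma,
\]

where \(\sigma^{2}\) is the variance of an individual error, \(\omega\) the covariance between errors in the same fold, and \(\gamma\) the covariance between errors in different folds.

To make the estimators under study concrete, the following Python sketch computes several of them. This is an illustration, not code from the paper: scikit-learn's GaussianNB stands in for the paper's naive Bayes classifier, and the two-Gaussian toy problem at the end is an invented example.

```python
# Illustrative sketch only (assumptions: scikit-learn is available, GaussianNB
# stands in for the paper's naive Bayes, and the toy data below is invented).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import StratifiedKFold, train_test_split

rng = np.random.default_rng(0)

def error(clf, X, y):
    """Empirical 0-1 error of a fitted classifier on (X, y)."""
    return float(np.mean(clf.predict(X) != y))

def resubstitution(X, y):
    # Train and test on the same data: low variance but optimistically biased.
    return error(GaussianNB().fit(X, y), X, y)

def holdout(X, y, test_size=1/3, seed=0, stratified=True):
    # stratified=True gives the stratified variant discussed in the abstract.
    Xtr, Xte, ytr, yte = train_test_split(
        X, y, test_size=test_size, random_state=seed,
        stratify=y if stratified else None)
    return error(GaussianNB().fit(Xtr, ytr), Xte, yte)

def repeated_holdout(X, y, repeats=10):
    # Averaging over repeated splits reduces the variance due to partitioning.
    return float(np.mean([holdout(X, y, seed=r) for r in range(repeats)]))

def kfold_cv(X, y, k=10):
    errs = [error(GaussianNB().fit(X[tr], y[tr]), X[te], y[te])
            for tr, te in StratifiedKFold(n_splits=k, shuffle=True,
                                          random_state=0).split(X, y)]
    return float(np.mean(errs))

def bootstrap_632(X, y, B=50):
    # Efron's 0.632 estimator: a weighted blend of the resubstitution error
    # and the average out-of-bag (leave-one-out bootstrap) error.
    n = len(y)
    oob_errs = []
    for _ in range(B):
        idx = rng.integers(0, n, n)            # sample with replacement
        oob = np.setdiff1d(np.arange(n), idx)  # instances left out of the sample
        if oob.size:
            clf = GaussianNB().fit(X[idx], y[idx])
            oob_errs.append(error(clf, X[oob], y[oob]))
    return 0.368 * resubstitution(X, y) + 0.632 * float(np.mean(oob_errs))

# Invented toy problem: two 5-dimensional Gaussian classes.
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)), rng.normal(1.0, 1.0, (100, 5))])
y = np.repeat([0, 1], 100)
print(resubstitution(X, y), kfold_cv(X, y), bootstrap_632(X, y))
```

Recomputing any of these estimates over many training sets drawn from the same distribution, and over many internal randomizations (splits, bootstrap resamples) for a fixed training set, exposes precisely the two sources of sensitivity, external and internal, whose contributions the proposed framework decomposes.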