A general framework for the statistical analysis of the sources of variance for classification error estimators

  • Authors:
  • Juan D. Rodríguez; Aritz Pérez; Jose A. Lozano

  • Affiliations:
  • University of the Basque Country (UPV/EHU), Department of Computer Science and Artificial Intelligence, Facultad de Informática, Paseo de Manuel Lardizábal 1, Donostia-San Sebastián, 200 ... (all authors)

  • Venue:
  • Pattern Recognition
  • Year:
  • 2013

Abstract

Estimating the prediction error of classifiers induced by supervised learning algorithms is important not only to predict their future error, but also to choose a classifier from a given set (model selection). If the goal is to estimate the prediction error of a particular classifier, the desired estimator should have low bias and low variance. If the goal is model selection, however, the chosen estimator should have low variance in order to make fair comparisons, under the assumption that the bias term is independent of the considered classifier. This paper follows the analysis of the statistical properties of k-fold cross-validation estimators proposed in [1] and extends it to the most popular error estimators: resubstitution, holdout, repeated holdout, simple bootstrap and 0.632 bootstrap, with and without stratification. We present a general framework for decomposing the variance of these error estimators according to the nature of the variance (irreducible/reducible variance) and the source of sensitivity (internal/external sensitivity). An extensive empirical study has been performed for the aforementioned estimators with naive Bayes and C4.5 classifiers over training sets drawn from assorted probability distributions. The empirical analysis consists of decomposing the variances following the proposed framework and checking the independence assumption between the bias and the considered classifier. Based on the results obtained, we propose the most appropriate error estimators for model selection under different experimental conditions.
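
The following is a minimal sketch, not the paper's experimental code, of how the distinction between internal sensitivity (randomness of the estimator itself, here the repeated-holdout splits on a fixed training set) and external sensitivity (randomness of the training sample) could be probed empirically. The Gaussian naive Bayes classifier, the synthetic data generator, and all parameter values are illustrative assumptions.

```python
# Sketch: within-dataset vs. between-dataset variance of a repeated-holdout
# error estimator.  All modeling choices below are illustrative assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def sample_dataset(n, d=5):
    """Draw a synthetic binary-classification sample (assumed distribution)."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=y[:, None] * 0.8, scale=1.0, size=(n, d))
    return X, y

def holdout_error(X, y, test_size=0.3, seed=None):
    """One repetition of the holdout estimator on a fixed training set."""
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=test_size,
                                          random_state=seed)
    clf = GaussianNB().fit(Xtr, ytr)
    return np.mean(clf.predict(Xte) != yte)

n_datasets, n_repeats, n = 30, 30, 200
per_dataset_means, per_dataset_vars = [], []
for _ in range(n_datasets):                       # external source: a new training sample
    X, y = sample_dataset(n)
    errs = [holdout_error(X, y, seed=r)           # internal source: a new random split
            for r in range(n_repeats)]
    per_dataset_means.append(np.mean(errs))
    per_dataset_vars.append(np.var(errs))

internal_var = np.mean(per_dataset_vars)   # variance due to the estimator's own randomness
external_var = np.var(per_dataset_means)   # variance due to sampling the training set
print(f"internal (within-dataset) variance:  {internal_var:.5f}")
print(f"external (between-dataset) variance: {external_var:.5f}")
```

Under this kind of sketch, stratified variants or bootstrap-based estimators would simply replace the splitting rule inside `holdout_error`, while the outer loop over freshly sampled datasets plays the same role for any estimator.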