In this paper, we compare two well-known estimates of the generalization error: the training error and the leave-one-out error. We focus on lower bounds on the performance of these estimates. Contrary to common intuition, we show that in the worst case the leave-one-out estimate is worse than the training error.
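To make the two estimators concrete, here is a minimal sketch of how each is computed for a binary classifier. It assumes scikit-learn and NumPy are available; the logistic-regression model, the synthetic dataset, and the sample size are illustrative choices, not taken from the paper.

```python
# Sketch only: the dataset and classifier below are hypothetical choices.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n = 50
X = rng.normal(size=(n, 2))                 # toy binary-classification data
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = LogisticRegression()

# Training error: error rate of the classifier on the sample it was fit on.
clf.fit(X, y)
training_error = np.mean(clf.predict(X) != y)

# Leave-one-out error: for each point, fit on the remaining n-1 points,
# test on the held-out point, and average the n resulting 0/1 losses.
losses = []
for train_idx, test_idx in LeaveOneOut().split(X):
    clf.fit(X[train_idx], y[train_idx])
    losses.append(clf.predict(X[test_idx])[0] != y[test_idx][0])
loo_error = np.mean(losses)

print(f"training error: {training_error:.3f}, "
      f"leave-one-out error: {loo_error:.3f}")
```

Both quantities estimate the same generalization error; the paper's point is that, in the worst case, the leave-one-out estimate can deviate from it more than the training error does.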