An optimal error rate estimator based on average conditional error rate: Asymptotic results
Pattern Recognition Letters
Estimating the classification error rate of a classifier is a key issue in machine learning. Such an estimate is needed to compare classifiers or to tune the parameters of a parameterized classifier. Several methods have been proposed to estimate the error rate, most of which rely on partitioning the data set or drawing bootstrap samples from it. Error estimators can suffer from bias (deviation from the actual error rate) and/or variance (sensitivity to the particular data set). In this work, we propose an error rate estimator that fits a generative model and a posterior probability model to represent the underlying process that generates the data, and exploits these models in a Monte Carlo fashion to produce two biased estimators whose best combination is determined by an iterative procedure. We test our estimator against state-of-the-art estimators and show that it provides a reliable estimate in terms of mean squared error.
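The combination idea in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm: the toy data, the spherical-Gaussian generative model, the nearest-class-mean classifier, and the fixed convex weight `alpha` (standing in for the paper's iteratively determined combination) are all illustrative assumptions. The sketch pairs an optimistically biased resubstitution estimate with a Monte Carlo estimate computed on samples drawn from the fitted generative model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class toy data (not from the paper)
n = 30
X0 = rng.normal(loc=-1.0, scale=1.0, size=(n, 2))
X1 = rng.normal(loc=+1.0, scale=1.0, size=(n, 2))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# Fit a simple generative model: one spherical Gaussian per class (MLE)
mu = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
sigma = np.array([X[y == c].std() for c in (0, 1)])

def classify(pts):
    # Nearest-class-mean rule stands in for the trained classifier
    d0 = np.linalg.norm(pts - mu[0], axis=1)
    d1 = np.linalg.norm(pts - mu[1], axis=1)
    return (d1 < d0).astype(int)

# Estimator 1: resubstitution error on the training data
# (known to be optimistically biased)
resub = float(np.mean(classify(X) != y))

# Estimator 2: Monte Carlo error under the fitted generative model,
# assuming equal class priors
m = 20000
Xs0 = rng.normal(mu[0], sigma[0], size=(m, 2))
Xs1 = rng.normal(mu[1], sigma[1], size=(m, 2))
mc = 0.5 * (np.mean(classify(Xs0) != 0) + np.mean(classify(Xs1) != 1))

# Combine the two biased estimators; a fixed convex weight stands in
# for the paper's iterative solution for the best combination
alpha = 0.5
estimate = alpha * mc + (1 - alpha) * resub
```

The intent is that the two estimators have biases of opposite tendency (resubstitution optimistic, the model-based Monte Carlo estimate tied to the fitted model's fidelity), so a well-chosen convex combination can reduce the overall mean squared error.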