Estimation of Classifier Performance
IEEE Transactions on Pattern Analysis and Machine Intelligence
The design of a pattern recognition system requires careful attention to error estimation: the error rate is the single most important descriptor of a classifier's performance. The commonly used error-rate estimates are based on the holdout, resubstitution, and leave-one-out methods. All suffer from either large bias or large variance, and their sampling distributions are not known. Bootstrapping refers to a class of procedures that resample the given data by computer. It makes it possible to determine the statistical properties of an estimator when very little is known about the underlying distribution and no additional samples are available. Since its introduction in the last decade, the bootstrap technique has been successfully applied to many statistical estimation and inference problems, but it has not been exploited in the design of pattern recognition systems. We report results on the application of several bootstrap techniques to estimating the error rate of 1-NN and quadratic classifiers. Our experiments show that, in most cases, the confidence interval of a bootstrap estimator of classification error is smaller than that of the leave-one-out estimator. The errors of 1-NN, quadratic, and Fisher classifiers are also estimated for several real data sets.
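The resampling idea described above can be sketched in a few lines. The following is a minimal illustration of a bootstrap error estimate for a 1-NN classifier, not the authors' exact procedure: each bootstrap replicate trains on a sample drawn with replacement and is tested on the points left out of that sample (an "e0"-style estimate), and the out-of-sample error rates are averaged. The synthetic two-class Gaussian data set is purely illustrative.

```python
import numpy as np

def nn1_predict(X_train, y_train, X_test):
    """Classify each test point by the label of its nearest training point (1-NN)."""
    # Pairwise squared Euclidean distances, shape (n_test, n_train)
    d = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    return y_train[d.argmin(axis=1)]

def bootstrap_error(X, y, B=100, seed=None):
    """e0-style bootstrap estimate of the 1-NN error rate.

    For each of B replicates, draw n indices with replacement as the
    training set, test on the out-of-bag points, and average the
    resulting error rates.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    errs = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)            # sample with replacement
        oob = np.setdiff1d(np.arange(n), idx)       # points left out of the sample
        if oob.size == 0:
            continue
        pred = nn1_predict(X[idx], y[idx], X[oob])
        errs.append(np.mean(pred != y[oob]))
    return float(np.mean(errs))

# Illustrative two-class Gaussian data (50 points per class in 2-D)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
               rng.normal(2.0, 1.0, (50, 2))])
y = np.repeat([0, 1], 50)
print(f"bootstrap 1-NN error estimate: {bootstrap_error(X, y, B=50, seed=1):.3f}")
```

Because each replicate is tested only on points absent from its training sample, the estimate avoids the optimistic bias of resubstitution; averaging over many replicates is what reduces the variance relative to a single holdout split.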