Small-sample error-rate estimators for nearest-neighbor classifiers are examined and contrasted with the same estimators for three-nearest-neighbor classifiers. The performance of the bootstrap estimators, e0 and 0.632B, is considered relative to the leaving-one-out and other cross-validation estimators. Monte Carlo simulations are used to measure the performance of the error-rate estimators, and the experimental results are compared to previously reported simulations for nearest-neighbor and alternative classifiers. Each estimator is shown to have strengths and weaknesses across varying apparent and true error rates. A combined estimator that corrects the leaving-one-out estimator (by combining bootstrap and cross-validation estimators) gives strong results over a broad range of situations.
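To make the estimators concrete, the sketch below implements the e0 and 0.632B bootstrap estimates for a 1-NN classifier. This is an illustrative reconstruction under standard definitions (e0 averages the error on points left out of each bootstrap resample; 0.632B blends e0 with the apparent error as 0.368·app + 0.632·e0), not the authors' simulation code; the function names and data layout are invented for the example.

```python
import random

def one_nn_predict(train, query):
    # 1-NN rule: return the label of the closest training point
    # (squared Euclidean distance; train holds (features, label) pairs).
    best = min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)))
    return best[1]

def error_rate(train, test):
    # Fraction of test points misclassified by a 1-NN rule built on `train`.
    if not test:
        return 0.0
    wrong = sum(1 for x, y in test if one_nn_predict(train, x) != y)
    return wrong / len(test)

def bootstrap_632(data, n_boot=100, seed=0):
    """0.632B bootstrap estimate of the 1-NN error rate.

    e0 is the average error on points left out of each bootstrap resample;
    0.632B = 0.368 * apparent error + 0.632 * e0.
    """
    rng = random.Random(seed)
    n = len(data)
    e0_runs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]   # sample n points with replacement
        in_bag = set(idx)
        boot = [data[i] for i in idx]
        out = [data[i] for i in range(n) if i not in in_bag]  # ~36.8% left out
        if out:
            e0_runs.append(error_rate(boot, out))
    e0 = sum(e0_runs) / len(e0_runs)
    # The apparent (resubstitution) error of 1-NN is 0 when each point is its
    # own nearest neighbor, which is exactly why e0 and 0.632B are needed here.
    apparent = error_rate(data, data)
    return 0.368 * apparent + 0.632 * e0
```

Because the apparent error of a 1-NN rule on its own training set is (near) zero, the resubstitution estimate is uselessly optimistic for this classifier; the out-of-bag term e0 supplies the pessimistic counterweight that the 0.632 weighting balances.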