We investigate the error versus reject tradeoff for classifiers. Our analysis is motivated by the remarkable similarity of error-reject tradeoff curves across widely differing algorithms for classifying handwritten characters. We present the data in a rescaled form that makes this universal character particularly evident. Based on Chow's theory of the error-reject tradeoff and its underlying Bayesian analysis, we argue that such universality is in fact to be expected for general classification problems. Furthermore, we extend Chow's theory to classifiers working from finite samples on a broad, albeit limited, class of problems. The problems we consider are effectively binary, i.e., classification problems for which almost all inputs involve a choice between the correct classification and at most one predominant alternative. We show that for such problems at most half of the initially rejected inputs would have been erroneously classified: near a two-way decision boundary the posterior probability of the most likely class approaches 1/2, so a rejected input is at most as likely to be misclassified as to be classified correctly. We show further that such problems arise naturally as small perturbations of the PAC model for large training sets. The perturbed model leads us to conclude that the dominant source of error is pairwise overlap between categories. For infinite training sets, the overlap is due to noise and/or poor preprocessing; for finite training sets there is an additional contribution from the inevitable displacement of the decision boundaries caused by the finiteness of the sample. In either case, a rejection mechanism that rejects inputs in a shell surrounding the decision boundaries leads to a universal form for the error-reject tradeoff. Finally, we analyze a specific reject mechanism based on the extent of consensus among an ensemble of classifiers. For the ensemble reject mechanism we derive an analytic expression for the error-reject tradeoff based on a maximum entropy estimate of the problem-difficulty distribution.
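To make the tradeoff concrete, the following sketch (an illustration of Chow's rule on a synthetic problem, not the experiments described above) classifies samples from two overlapping one-dimensional Gaussians and rejects any input whose maximum posterior falls below a threshold t. Sweeping t traces out an error-reject curve, and the final column confirms that, on this effectively binary problem, under half of the rejected inputs would have been misclassified. The generative model, sample size, and thresholds are all arbitrary choices for the demonstration.

```python
# Minimal numpy sketch of Chow's rejection rule on a synthetic,
# effectively binary problem: two overlapping 1-D Gaussian classes.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Two equiprobable classes with overlapping class-conditional densities.
y = rng.integers(0, 2, size=n)
x = rng.normal(loc=np.where(y == 0, -1.0, 1.0), scale=1.0)

# Exact posterior P(y = 1 | x) for this generative model.
def posterior_1(x):
    l0 = np.exp(-0.5 * (x + 1.0) ** 2)
    l1 = np.exp(-0.5 * (x - 1.0) ** 2)
    return l1 / (l0 + l1)

p1 = posterior_1(x)
p_max = np.maximum(p1, 1.0 - p1)   # confidence of the Bayes classifier
y_hat = (p1 >= 0.5).astype(int)    # Bayes decision

print(f"{'thresh':>7} {'reject':>7} {'error':>7}  err-among-rejected")
for t in [0.5, 0.6, 0.7, 0.8, 0.9]:
    rejected = p_max < t           # Chow's rule: reject low-confidence inputs
    accepted = ~rejected
    reject_rate = rejected.mean()
    error_rate = (y_hat[accepted] != y[accepted]).mean()
    # For an effectively binary problem, at most ~half of the rejected
    # inputs would have been misclassified had they been accepted.
    err_rej = (y_hat[rejected] != y[rejected]).mean() if rejected.any() else 0.0
    print(f"{t:7.2f} {reject_rate:7.3f} {error_rate:7.3f}  {err_rej:.3f}")
```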
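The ensemble reject mechanism can be sketched in the same spirit. Below, an ensemble of jittered copies of the Bayes decision boundary stands in for classifiers trained on different finite samples (the jitter scale 0.3 and ensemble size K = 25 are hypothetical parameters, not values from the paper); an input is rejected when the fraction of classifiers agreeing with the plurality vote falls below a consensus cutoff c.

```python
# Illustrative ensemble-consensus rejection, self-contained.
import numpy as np

rng = np.random.default_rng(1)
n, K = 200_000, 25
y = rng.integers(0, 2, size=n)
x = rng.normal(loc=np.where(y == 0, -1.0, 1.0), scale=1.0)

# Each member is the Bayes rule with a jittered boundary, mimicking the
# displacement of decision boundaries caused by finite training samples.
boundaries = rng.normal(0.0, 0.3, size=K)
votes = x[:, None] > boundaries[None, :]        # shape (n, K); True => class 1
frac_1 = votes.mean(axis=1)
consensus = np.maximum(frac_1, 1.0 - frac_1)    # agreement with the plurality
y_hat = (frac_1 >= 0.5).astype(int)             # plurality decision

for c in [0.52, 0.6, 0.8, 1.0]:
    accepted = consensus >= c                   # reject low-consensus inputs
    err = (y_hat[accepted] != y[accepted]).mean()
    print(f"c={c:.2f}  reject={1.0 - accepted.mean():.3f}  error={err:.3f}")
```

Raising c shrinks the accepted region toward inputs far from every jittered boundary, which is the shell-rejection picture described above: the rejected shell absorbs exactly the inputs whose classification is most sensitive to boundary displacement.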