A method is proposed for defining a reject option applicable to any given 0-reject classifier. The reject option is based on an estimate of classification reliability, measured by a reliability evaluator Ψ: once a reject threshold σ has been fixed, a sample is rejected if its value of Ψ falls below σ. Since σ represents the lowest tolerable reliability level, varying its value makes the reject option more or less severe. To adapt the behavior of the reject option to the requirements of the considered application domain, a function P characterizing the reject option's adequacy to the domain is introduced. It is shown that P can be expressed as a function of σ, so the optimal threshold is defined as the value of σ that maximizes P. The method for determining the optimal threshold is independent of the specific 0-reject classifier, while the definition of the reliability evaluators depends on the classifier's architecture. General criteria for defining appropriate reliability evaluators within a classification paradigm are illustrated in the paper; they are based on locating, in the feature space, the samples likely to be classified with low reliability. Reliability evaluators are defined for three popular neural network architectures (backpropagation, learning vector quantization, and probabilistic networks). Finally, the method is tested on a complex classification problem with data generated according to a distribution-of-distributions model.
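The scheme above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the reliability evaluator `psi` here is the gap between the two highest class scores (one common choice; the paper derives architecture-specific evaluators), and the effectiveness function `effectiveness` is an assumed cost-based form of P(σ) that rewards accepted correct classifications and penalizes accepted errors and rejections. The cost values are illustrative.

```python
import numpy as np

def psi(probs):
    """Reliability evaluator: gap between the two highest class scores.

    `probs` is an (n_samples, n_classes) array of classifier outputs.
    A small gap means the winning class barely beats the runner-up,
    i.e. low classification reliability.
    """
    top2 = np.sort(probs, axis=1)[:, -2:]   # two largest scores per sample
    return top2[:, 1] - top2[:, 0]

def effectiveness(psi_vals, correct, sigma,
                  gain_correct=1.0, cost_error=3.0, cost_reject=1.0):
    """Assumed cost-based form of P(sigma) on a validation set.

    Samples with psi >= sigma are accepted; the rest are rejected.
    """
    accepted = psi_vals >= sigma
    rc = np.mean(accepted & correct)    # rate of accepted, correct samples
    re = np.mean(accepted & ~correct)   # rate of accepted, misclassified samples
    rr = np.mean(~accepted)             # rejection rate
    return gain_correct * rc - cost_error * re - cost_reject * rr

def best_threshold(psi_vals, correct, candidates):
    """Pick the threshold sigma that maximizes P over candidate values."""
    scores = [effectiveness(psi_vals, correct, s) for s in candidates]
    return candidates[int(np.argmax(scores))]
```

With a validation set where confident predictions tend to be correct and uncertain ones tend to be wrong, sweeping `candidates` selects an intermediate σ that rejects the unreliable samples instead of accepting everything (paying the error cost) or rejecting everything (paying the rejection cost).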