In many classification problems, objects should be rejected when the confidence in their classification is too low. An example is face recognition, where the faces of a selected group of people have to be classified but all other faces and non-faces should be rejected. These problems are typically solved by estimating the class densities, assigning each object to the class with the highest posterior probability, and thresholding the total probability density to detect outliers. Unfortunately, this procedure does not easily allow for class-dependent thresholds, or for class models that are based on distances rather than probability densities. In this paper we propose a new heuristic for combining one-class models of any type to solve the multi-class classification problem with outlier rejection. It normalizes the average model output per class, instead of applying the more common non-linear transformation of the distances. This makes it possible to adjust the rejection threshold per class, to combine class models that are not (all) based on probability densities, and to add class models without affecting the decision boundaries of the existing models. Experiments show that for several classification problems, using class-specific models significantly improves performance.
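The decision rule described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: it uses a simple distance-to-class-mean score as a stand-in for an arbitrary one-class model, normalizes each model's output by its average on its own training data, and rejects an object when its best normalized score exceeds that class's own threshold. All class and method names here are hypothetical.

```python
import numpy as np

class OneClassRejectEnsemble:
    """Illustrative sketch: one one-class model per class, each with its
    own rejection threshold, combined via per-class output normalization."""

    def __init__(self, thresholds):
        # thresholds: dict mapping class label -> rejection threshold
        # on the normalized distance (an assumption for this sketch)
        self.thresholds = thresholds
        self.means = {}
        self.avg_dist = {}

    def fit(self, X, y):
        for c in np.unique(y):
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            d = np.linalg.norm(Xc - mu, axis=1)
            self.means[c] = mu
            # normalize by the average model output on this class's
            # training data, instead of a non-linear transformation
            self.avg_dist[c] = d.mean()
        return self

    def predict(self, X):
        classes = sorted(self.means)
        # normalized distance of every object to every class model
        D = np.stack(
            [np.linalg.norm(X - self.means[c], axis=1) / self.avg_dist[c]
             for c in classes],
            axis=1,
        )
        best = D.argmin(axis=1)
        labels = np.array(classes)[best].astype(object)
        # per-class rejection: an object is an outlier when even its
        # best-matching model scores beyond that class's threshold
        thr = np.array([self.thresholds[c] for c in classes])
        labels[D.min(axis=1) > thr[best]] = "reject"
        return labels
```

Because each model is normalized and thresholded independently, adding a model for a new class leaves the accept/reject behavior of the existing class models unchanged, which is the property the abstract highlights.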