Combination of Multiple Classifiers Using Local Accuracy Estimates
IEEE Transactions on Pattern Analysis and Machine Intelligence
Semi-Supervised Self-Training of Object Detection Models
WACV-MOTION '05 Proceedings of the Seventh IEEE Workshops on Application of Computer Vision (WACV/MOTION'05) - Volume 1
Evaluating Classification Reliability for Combining Classifiers
ICIAP '07 Proceedings of the 14th International Conference on Image Analysis and Processing
Using co-training and self-training in semi-supervised multiple classifier systems
SSPR'06/SPR'06 Proceedings of the 2006 Joint IAPR International Conference on Structural, Syntactic, and Statistical Pattern Recognition
In self-training methods, unlabeled samples are first assigned a provisional label by the classifier and are then used to extend the classifier's own training set. In this latter step it is important to select only those samples whose classification is likely to be correct, according to a suitably defined reliability measure. In this paper we study to what extent the choice of a particular technique for evaluating classification reliability affects learning performance. To this aim, we compare five different reliability evaluators on four publicly available datasets, analyzing and discussing the results obtained.
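The self-training loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's experimental setup: the scikit-learn logistic-regression base classifier, the synthetic dataset, the 0.95 acceptance threshold, and the use of the maximum posterior probability as the reliability measure are all assumptions made here for the sake of a runnable example (the paper compares five different reliability evaluators).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
labeled = rng.rand(len(y)) < 0.1          # only ~10% of samples start out labeled
X_lab, y_lab = X[labeled], y[labeled]
X_unl = X[~labeled]

clf = LogisticRegression(max_iter=1000)
for _ in range(5):                        # a few self-training rounds
    clf.fit(X_lab, y_lab)
    if len(X_unl) == 0:
        break
    proba = clf.predict_proba(X_unl)
    # Reliability measure (assumed here): confidence of the provisional label.
    reliability = proba.max(axis=1)
    keep = reliability >= 0.95            # accept only reliable classifications
    if not keep.any():
        break
    # Extend the training set with the provisionally labeled samples.
    X_lab = np.vstack([X_lab, X_unl[keep]])
    y_lab = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    X_unl = X_unl[~keep]
```

Swapping the `reliability` line for a different evaluator (e.g. the margin between the two largest posteriors) changes which provisional labels are trusted, which is exactly the design choice the paper investigates.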