Asymptotic Optimality of Transductive Confidence Machine
ALT '02 Proceedings of the 13th International Conference on Algorithmic Learning Theory
A Universal Well-Calibrated Algorithm for On-line Classification
The Journal of Machine Learning Research
Online learning of conditionally I.I.D. data
ICML '04 Proceedings of the twenty-first international conference on Machine learning
Criterion of calibration for transductive confidence machine with limited feedback
Theoretical Computer Science - Algorithmic learning theory
Well-calibrated predictions from on-line compression models
Theoretical Computer Science - Algorithmic learning theory
Using a similarity measure for credible classification
Discrete Applied Mathematics
Effective confidence region prediction using probability forecasters
AIME'05 Proceedings of the 10th conference on Artificial Intelligence in Medicine
Strangeness minimisation feature selection with confidence machines
IDEAL'06 Proceedings of the 7th international conference on Intelligent Data Engineering and Automated Learning
Transductive Confidence Machine (TCM) and its computationally efficient modification, Inductive Confidence Machine (ICM), are ways of complementing machine-learning algorithms with practically useful measures of confidence. We show that when TCM and ICM are used in the on-line mode, their confidence measures are well-calibrated, in the sense that predictive regions at confidence level 1 − δ will be wrong with relative frequency at most δ (approaching δ in the case of randomised TCM and ICM) in the long run. This is not merely an asymptotic phenomenon: in fact, the error probability of randomised TCM and ICM is exactly δ at every trial, and errors happen independently at different trials.
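The calibration property described in the abstract can be illustrated with a simulation. The sketch below is a minimal, generic inductive conformal classifier (not the paper's own implementation): it uses a hypothetical distance-to-class-mean nonconformity score on synthetic 1-D Gaussian data, forms predictive regions at confidence level 1 − δ, and checks that the empirical error rate stays at or below δ. All data, scores, and parameter choices here are illustrative assumptions.

```python
import random

random.seed(0)

def gen(n):
    """Synthetic binary data: class c has mean 2c, unit variance (illustrative)."""
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        data.append((random.gauss(2.0 * y, 1.0), y))
    return data

train = gen(200)   # proper training set (fits the nonconformity measure)
cal = gen(200)     # calibration set (gives the p-values)
test = gen(1000)

# Nonconformity score: distance to the candidate class's training mean
# (a simple stand-in for any underlying learning algorithm).
means = {c: sum(x for x, y in train if y == c) /
            sum(1 for _, y in train if y == c) for c in (0, 1)}

def score(x, y):
    return abs(x - means[y])

# Calibration scores use the true labels of the calibration examples.
cal_scores = [score(x, y) for x, y in cal]

delta = 0.1  # significance level; regions are at confidence 1 - delta
errors = 0
for x, y_true in test:
    region = set()
    for y in (0, 1):
        a = score(x, y)
        # ICM p-value for candidate label y
        p = (sum(1 for s in cal_scores if s >= a) + 1) / (len(cal_scores) + 1)
        if p > delta:
            region.add(y)
    if y_true not in region:
        errors += 1

print("empirical error rate:", errors / len(test))
```

Because the non-randomised ICM is conservative, the printed error rate should not noticeably exceed δ = 0.1; the randomised version discussed in the abstract would hit δ exactly in expectation.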