When the correct prior is known, Bayesian algorithms give optimal decisions, and accurate confidence values for predictions can be obtained. If the prior is incorrect, however, these confidence values have no theoretical basis, even though the algorithms' predictive performance may remain good. There also exist many successful learning algorithms that depend only on the iid assumption; often, however, they produce no confidence values for their predictions. Bayesian frameworks are frequently applied to such algorithms to obtain these values, but they can rely on unjustified priors. In this paper we outline the typicalness framework, which can be used in conjunction with many other machine learning algorithms. The framework provides confidence information based only on the standard iid assumption and is therefore much more robust to different underlying data distributions. We show how the framework can be applied to existing algorithms, and we present experimental results showing that the typicalness approach performs close to Bayes when the prior is known to be correct. Unlike Bayes, however, the method still gives accurate confidence values even when different data distributions are considered.
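To make the idea concrete, below is a minimal sketch of how a typicalness (p-)value can be computed for a hypothesised label of a new example. The 1-nearest-neighbour nonconformity measure and all function and variable names here are illustrative assumptions of this sketch, not taken from the paper; the only substantive requirement, as the abstract states, is the iid (exchangeability) assumption.

```python
import numpy as np

def nonconformity(xs, ys, i):
    # 1-NN nonconformity score for example i: distance to the nearest
    # example with the same label divided by distance to the nearest
    # example with a different label. Larger values = stranger example.
    d = np.linalg.norm(xs - xs[i], axis=1)
    d[i] = np.inf                       # ignore the example itself
    d_same = d[ys == ys[i]].min()
    d_other = d[ys != ys[i]].min()
    return d_same / d_other

def typicalness(train_x, train_y, x_new, y_hyp):
    # p-value for the hypothesis that x_new has label y_hyp: the fraction
    # of examples in the extended sequence that are at least as
    # nonconforming as x_new itself. Under iid data, the probability that
    # this p-value is <= eps is at most eps.
    xs = np.vstack([train_x, x_new])
    ys = np.append(train_y, y_hyp)
    scores = np.array([nonconformity(xs, ys, i) for i in range(len(ys))])
    return np.mean(scores >= scores[-1])

# Toy usage: a point near class 0 is typical as label 0, atypical as 1.
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
y = np.array([0, 0, 1, 1])
for label in (0, 1):
    print(label, typicalness(X, y, np.array([0.2, 0.5]), label))
```

A hypothesised label whose p-value falls below a chosen significance level can be rejected, which is how such p-values yield the confidence information discussed above without any appeal to a prior.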