Regression conformal prediction with nearest neighbours
Journal of Artificial Intelligence Research
In this paper we apply Conformal Prediction (CP) to the k-Nearest Neighbours Regression (k-NNR) algorithm and propose a way of extending the nonconformity measure typically used for regression. Unlike traditional regression methods, which produce point predictions, Conformal Predictors output predictive regions that satisfy a given confidence level. When the regular regression nonconformity measure is used, the resulting predictive regions have roughly the same width for all examples in the test set. However, it would be more natural for the size of each region to vary according to how difficult the corresponding example is to predict. We define two new nonconformity measures that produce predictive regions of variable width, depending on the expected accuracy of the algorithm on each example. As a result, the predictive regions they produce are in most cases much tighter than those of the simple regression measure.
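The idea of a normalised nonconformity measure can be sketched as follows. This is a minimal illustration, not the paper's exact method: it uses the split (inductive) variant of CP, and assumes a difficulty estimate sigma given by the mean distance of an example to its k nearest training neighbours; all function names, the smoothing constant `beta`, and the quantile rule are illustrative choices.

```python
import numpy as np

def knn_predict(X_train, y_train, X, k=3):
    """k-NN point prediction and mean neighbour distance (difficulty estimate)."""
    preds, sigmas = [], []
    for x in X:
        d = np.linalg.norm(X_train - x, axis=1)
        idx = np.argsort(d)[:k]
        preds.append(y_train[idx].mean())   # point prediction: neighbour average
        sigmas.append(d[idx].mean())        # sigma: larger in sparse regions
    return np.array(preds), np.array(sigmas)

def conformal_intervals(X_train, y_train, X_cal, y_cal, X_test,
                        k=3, beta=0.1, confidence=0.9):
    """Predictive intervals whose width varies with the difficulty estimate."""
    # Normalised nonconformity scores on a held-out calibration set:
    # alpha = |y - yhat| / (sigma + beta), so hard examples are not over-penalised.
    yhat_cal, sig_cal = knn_predict(X_train, y_train, X_cal, k)
    alphas = np.abs(y_cal - yhat_cal) / (sig_cal + beta)
    # Calibration quantile at the requested confidence level (illustrative rule).
    q = np.quantile(alphas, confidence)
    # Test intervals: wider where sigma is large, tighter where it is small.
    yhat, sig = knn_predict(X_train, y_train, X_test, k)
    half = q * (sig + beta)
    return yhat - half, yhat + half
```

With the plain measure alpha = |y - yhat|, `half` would be the same constant for every test example; dividing by `sigma + beta` is what lets the region width adapt to each example's expected accuracy.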