Weighted Kernel Regression for Predicting Changing Dependencies
ECML '07 Proceedings of the 18th European conference on Machine Learning
This paper deals with the problem of making predictions in the online mode of learning, where the dependence of the outcome y_t on the signal x_t can change with time. The Aggregating Algorithm (AA) is a technique that optimally merges experts from a pool, so that the resulting strategy suffers a cumulative loss that is almost as good as that of the best expert in the pool. We apply the AA to the case where the experts are all the linear predictors that can change with time. KAARCh is the kernel version of the resulting algorithm. In the kernel case, the experts are all the decision rules in some reproducing kernel Hilbert space that can change over time. We show that KAARCh suffers a cumulative square loss that is almost as good as that of any expert that does not change very rapidly.
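The merging step that the abstract describes can be illustrated with a minimal exponentially weighted forecaster for square loss. This is a hedged sketch of a simplified relative of the AA, not of KAARCh itself: the function name, the constant experts in the usage line, and the learning rate eta = 1/2 (valid because square loss on [0, 1] is 1/2-exp-concave) are all assumptions made for this example.

```python
import math

def aggregate(expert_preds, outcomes, eta=0.5):
    """Exponentially weighted average forecaster for square loss.

    expert_preds: one prediction sequence per expert, values in [0, 1].
    outcomes: observed outcomes, values in [0, 1].
    Returns the forecaster's cumulative square loss; for square loss on
    [0, 1] with eta = 1/2 it exceeds the best expert's cumulative loss
    by at most (ln n)/eta, where n is the number of experts.
    """
    n = len(expert_preds)
    log_w = [0.0] * n  # log-weights; start from the uniform distribution
    total_loss = 0.0
    for t, y in enumerate(outcomes):
        shift = max(log_w)  # subtract the max before exp() for stability
        w = [math.exp(lw - shift) for lw in log_w]
        # merge the experts: weighted mean of their current predictions
        gamma = sum(wi * p[t] for wi, p in zip(w, expert_preds)) / sum(w)
        total_loss += (gamma - y) ** 2
        # discount every expert by its own square loss on this round
        for i, p in enumerate(expert_preds):
            log_w[i] -= eta * (p[t] - y) ** 2

    return total_loss

# Two constant (hypothetical) experts; the second is useless, yet the
# merged strategy's loss stays within (ln 2)/eta = 2 ln 2 of the best.
loss = aggregate([[1.0] * 20, [0.0] * 20], [1.0] * 20)
```

The weighted-mean merge is what makes the cumulative loss track the best expert: each round, experts that predicted badly are discounted exponentially, so their influence on the merged prediction decays.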