This paper gives a probabilistic derivation of an identity connecting the square loss of ridge regression run in on-line mode with the loss of the retrospectively best regressor. Corollaries of the identity that yield upper bounds on the cumulative loss of on-line ridge regression are also discussed.
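The setting compared in the abstract can be illustrated with a small sketch (an assumption for illustration, not the paper's derivation): on-line ridge regression predicts each label from the data seen so far, and its cumulative square loss is compared against the regularized loss of the regressor that is best in retrospect. The variable names, data, and regularization coefficient `a` below are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, a = 3, 200, 1.0  # dimension, number of rounds, ridge coefficient (illustrative)

# Synthetic data: a noisy linear signal (purely for demonstration)
w_true = rng.normal(size=d)
X = rng.normal(size=(T, d))
y = X @ w_true + 0.1 * rng.normal(size=T)

A = a * np.eye(d)   # running matrix a*I + sum_t x_t x_t^T
b = np.zeros(d)     # running vector sum_t y_t x_t
online_loss = 0.0
for t in range(T):
    x = X[t]
    # On-line ridge prediction made before y_t is revealed
    pred = x @ np.linalg.solve(A, b)
    online_loss += (y[t] - pred) ** 2
    A += np.outer(x, x)
    b += y[t] * x

# Regularized loss of the retrospectively best regressor theta
theta = np.linalg.solve(A, b)
batch_loss = a * theta @ theta + np.sum((y - X @ theta) ** 2)

print(f"on-line cumulative loss: {online_loss:.3f}")
print(f"best regularized loss in retrospect: {batch_loss:.3f}")
```

The on-line cumulative loss is never smaller than the minimal regularized loss in retrospect; the paper's identity makes the relationship between the two quantities exact.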