Leading strategies in competitive on-line prediction
Theoretical Computer Science
This paper introduces the class of stationary prediction strategies and constructs a prediction algorithm that asymptotically performs as well as the best continuous stationary strategy. We make mild compactness assumptions but no stochastic assumptions about the environment. In particular, the environment is not assumed to be stationary; the stationarity of the strategies we consider means only that they do not depend explicitly on time, and it is natural to restrict attention to stationary strategies even in many non-stationary environments.
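The paper's formal construction is not reproduced here, but the notion of a stationary strategy can be illustrated with a toy sketch: the strategy is a fixed map from the observed history to a prediction, applied identically at every step, with no explicit dependence on the time index. The windowed-mean rule below is a hypothetical example chosen for illustration, not a strategy from the paper.

```python
from typing import Sequence

def stationary_mean_strategy(history: Sequence[float], k: int = 3) -> float:
    """A toy stationary strategy: predict the mean of the last k outcomes.

    "Stationary" means the rule is a function of the observed history
    only; the time index t never appears in the rule itself.
    """
    window = history[-k:]
    return sum(window) / len(window) if window else 0.0

# The same rule is applied at every step of the prediction protocol;
# a non-stationary strategy would instead take t as an extra argument.
outcomes = [1.0, 0.0, 1.0, 1.0, 0.0]
predictions = [stationary_mean_strategy(outcomes[:t]) for t in range(len(outcomes))]
```

Note that even though the rule itself is time-invariant, its predictions still adapt to the environment through the growing history, which is why restricting to stationary strategies remains natural in non-stationary environments.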