COLT '90 Proceedings of the Third Annual Workshop on Computational Learning Theory
Matrix Computations (3rd ed.)
The Last-Step Minimax Algorithm. ALT '00 Proceedings of the 11th International Conference on Algorithmic Learning Theory
On Relative Loss Bounds in Generalized Linear Regression. FCT '99 Proceedings of the 12th International Symposium on Fundamentals of Computation Theory
Tracking the Best Linear Predictor. The Journal of Machine Learning Research
Prediction, Learning, and Games
Online Learning of Multiple Tasks with a Shared Loss. The Journal of Machine Learning Research
Re-adapting the Regularization of Weights for Non-Stationary Regression. ALT '11 Proceedings of the 22nd International Conference on Algorithmic Learning Theory
In online learning, the performance of an algorithm is typically compared, via a quantity called regret, to the performance of the best fixed function from some class. Forster [4] proposed a last-step min-max algorithm that is simpler than the algorithm of Vovk [12], yet achieves the same regret. However, the algorithm he analyzed assumed that the adversary's choices are bounded, so that only the two extreme cases arise, which is artificial. We fix this problem by weighting the examples so that the min-max problem is well defined, and we provide an analysis with logarithmic regret whose multiplicative factor may be better than both the bound of Forster [4] and that of Vovk [12]. We also derive a new bound that may be sub-logarithmic, like a recent bound of Orabona et al. [9], but with a possibly better multiplicative factor. Finally, we analyze the algorithm in a weak type of non-stationary setting, and show a regret bound that is sublinear whenever the non-stationarity is sublinear as well.
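To make the last-step min-max idea concrete, here is a minimal Python sketch of the unweighted forecaster for online linear regression with squared loss, in the style of Forster [4] (which coincides with the Vovk-Azoury-Warmuth predictor): at round t it predicts x_t^T A_t^{-1} b_{t-1}, where A_t includes the current instance x_t. The function name, signature, and ridge parameter `reg` are hypothetical illustration choices, and the example weighting proposed in this paper is not shown.

```python
import numpy as np

def last_step_minmax_predictions(X, y, reg=1.0):
    """Sketch of a last-step min-max forecaster for online linear
    regression with squared loss (hypothetical API, not the paper's code).

    At round t, after seeing x_t but before seeing y_t, predict
        yhat_t = x_t^T A_t^{-1} b_{t-1},
    where A_t = reg*I + sum_{s<=t} x_s x_s^T  (includes the current x_t)
    and   b_{t-1} = sum_{s<t} y_s x_s.
    """
    n, d = X.shape
    A = reg * np.eye(d)          # regularized instance correlation matrix
    b = np.zeros(d)              # running sum of y_s * x_s
    preds = np.zeros(n)
    for t in range(n):
        x = X[t]
        A += np.outer(x, x)      # fold in the current x_t before predicting
        preds[t] = x @ np.linalg.solve(A, b)
        b += y[t] * x            # label y_t revealed; update state
    return preds
```

As a usage note, updating A with the current instance before predicting is what distinguishes this last-step min-max rule from plain online ridge regression, which would predict with the previous round's matrix.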