Competing with wild prediction rules
Machine Learning
We consider the problem of on-line prediction competitive with a benchmark class of continuous but highly irregular prediction rules. It is known that if the benchmark class is a reproducing kernel Hilbert space, there exists a prediction algorithm whose average loss over the first $N$ examples does not exceed the average loss of any prediction rule in the class plus a “regret term” of $O(N^{-1/2})$. The elements of some natural benchmark classes, however, are so irregular that these classes are not Hilbert spaces. In this paper we develop Banach-space methods to construct a prediction algorithm with a regret term of $O(N^{-1/p})$, where $p \in (2,\infty)$ and $p-2$ reflects the degree to which the benchmark class fails to be a Hilbert space.
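To make the shape of the guarantee concrete, the following is a schematic rendering of the bound described above; the symbols $\lambda$ (the loss function), $\gamma_n$ (the algorithm's predictions), $(x_n, y_n)$ (the examples), and $D$ (a prediction rule in the benchmark class) are assumed notation for this sketch, not taken verbatim from the paper:

% Schematic form of the regret guarantee (assumed notation): the
% algorithm's average loss on the first N examples is within
% O(N^{-1/p}) of the average loss of any rule D in the benchmark
% class, for a fixed p in (2, infinity).
\[
  \frac{1}{N} \sum_{n=1}^{N} \lambda(y_n, \gamma_n)
  \;\le\;
  \frac{1}{N} \sum_{n=1}^{N} \lambda\bigl(y_n, D(x_n)\bigr)
  + O\bigl(N^{-1/p}\bigr),
  \qquad p \in (2, \infty).
\]
% Formally setting p = 2 in the exponent recovers the O(N^{-1/2})
% regret term available when the benchmark class is a reproducing
% kernel Hilbert space.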