Numerical recipes in C (2nd ed.): the art of scientific computing
Exponentiated gradient versus gradient descent for linear predictors
Information and Computation
On the influence of the kernel on the consistency of support vector machines
The Journal of Machine Learning Research
Competing with wild prediction rules
COLT'06 Proceedings of the 19th annual conference on Learning Theory
On-line regression competitive with reproducing kernel Hilbert spaces
TAMC'06 Proceedings of the Third international conference on Theory and Applications of Models of Computation
Worst-case quadratic loss bounds for prediction using linear functions and gradient descent
IEEE Transactions on Neural Networks
We consider the problem of on-line prediction competitive with a benchmark class of continuous but highly irregular prediction rules. It is known that if the benchmark class is a reproducing kernel Hilbert space, there exists a prediction algorithm whose average loss over the first N examples does not exceed the average loss of any prediction rule in the class plus a "regret term" of O(N^{-1/2}). The elements of some natural benchmark classes, however, are so irregular that these classes are not Hilbert spaces. In this paper we develop Banach-space methods to construct a prediction algorithm with a regret term of O(N^{-1/p}), where p ∈ [2, ∞) and p − 2 reflects the degree to which the benchmark class fails to be a Hilbert space. Only the square loss function is considered.
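To make the setting concrete, a minimal sketch of on-line prediction with square loss in the simplest (Hilbert-space, p = 2) case is on-line gradient descent over linear predictors, whose average loss trails the best fixed linear rule by a regret term of order N^{-1/2}. The function name, step-size schedule, and data below are illustrative assumptions, not the algorithm constructed in the paper.

```python
import numpy as np

def online_gd_square_loss(X, y, eta0=0.1):
    """On-line gradient descent for square loss over linear predictors.

    At each round t the learner predicts w_t . x_t, suffers
    (prediction - y_t)^2, and updates with a decreasing step size
    eta0 / sqrt(t+1). This is a hypothetical illustration of the
    p = 2 setting, not the Banach-space algorithm of the paper.
    """
    n, d = X.shape
    w = np.zeros(d)
    losses = np.empty(n)
    for t in range(n):
        pred = w @ X[t]
        err = pred - y[t]
        losses[t] = err ** 2
        # gradient of (w.x - y)^2 with respect to w is 2*err*x
        w -= (eta0 / np.sqrt(t + 1)) * 2 * err * X[t]
    return losses, w

# Usage: synthetic linear data; average loss shrinks as the
# learner approaches the best fixed linear predictor.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
y = X @ np.array([1.0, -1.0])
losses, w = online_gd_square_loss(X, y)
```

On well-behaved data the cumulative average loss approaches that of the best linear rule at the O(N^{-1/2}) rate the abstract cites for the Hilbert-space case; the paper's contribution is extending such guarantees, at rate O(N^{-1/p}), to benchmark classes that are only Banach spaces.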