The theory of prediction with expert advice usually deals with countable or finite-dimensional pools of experts. In this paper we give similar results for pools of decision rules belonging to an infinite-dimensional functional space which we call the Fermi–Sobolev space. For example, it is shown that for a wide class of loss functions (including the standard square, absolute, and log loss functions) the average loss of the master algorithm, over the first N steps, does not exceed the average loss of the best decision rule with a bounded Fermi–Sobolev norm plus O(N^{-1/2}). Our proof techniques are very different from the standard ones and are based on recent results about defensive forecasting. Given the probabilities produced by a defensive forecasting algorithm, which are known to be well calibrated and to have high resolution in the long run, we use the Expected Loss Minimization principle to find a suitable decision.
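In symbols, the guarantee stated in the abstract can be written roughly as follows; the notation (λ for the loss function, x_n and y_n for the data, γ_n for the master algorithm's decisions, D for a decision rule, and ||D||_FS for the Fermi–Sobolev norm with bound c) is our own gloss of the abstract, not necessarily the paper's:

```latex
\frac{1}{N}\sum_{n=1}^{N}\lambda(y_n,\gamma_n)
\;\le\;
\inf_{D:\,\|D\|_{\mathrm{FS}}\le c}\;
\frac{1}{N}\sum_{n=1}^{N}\lambda\bigl(y_n,D(x_n)\bigr)
\;+\;O\!\left(N^{-1/2}\right)
```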
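To make the final step concrete, here is a minimal sketch of an Expected Loss Minimization decision for a binary outcome: given a forecast probability p (e.g., produced by a defensive forecasting algorithm), choose the decision minimizing the expected loss under p. The square loss and the grid search over decisions are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def square_loss(y, gamma):
    """Square loss lambda(y, gamma) = (y - gamma)^2 (one of the losses named in the abstract)."""
    return (y - gamma) ** 2

def elm_decision(p, loss=square_loss, grid=np.linspace(0.0, 1.0, 1001)):
    """Return the decision gamma minimizing p*loss(1, gamma) + (1-p)*loss(0, gamma).

    The decision grid is a hypothetical discretization; any method of
    minimizing the expected loss over the decision space would do.
    """
    expected = p * loss(1, grid) + (1 - p) * loss(0, grid)
    return grid[np.argmin(expected)]

# For the square loss the expected-loss minimizer is the forecast itself:
print(elm_decision(0.3))  # ~0.3
```

For the square loss the minimizer of p(1-γ)² + (1-p)γ² is γ = p, which the grid search recovers; other losses from the class mentioned in the abstract (absolute, log) would give different optimal decisions.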