In this paper the problem of Prediction with Expert Advice is considered. We apply an existing method, the Aggregating Algorithm, to a specific class of experts. This class, depending on its parameter, approximates the class of continuous functions and thus corresponds to a natural way of describing a possible dependence between two continuous variables. We derive an explicit algorithm and prove an upper bound on the difference between its loss and the loss of the best expert; the bound is of the order of the squared logarithm of the number of steps. It lies between known bounds that are logarithmic and square-root in the number of steps. Having more triples (algorithm, class of experts, upper bound) to choose from helps in selecting an appropriate way of solving the problem of Prediction with Expert Advice for a particular application.
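To make the setting concrete, the following is a minimal sketch of the Aggregating Algorithm for the square loss on [0, 1], run over a finite pool of experts. The choice of constant experts, the learning rate eta = 2, and the substitution function used here are standard for the square-loss case but are illustrative assumptions, not the specific expert class (approximating continuous functions) studied in the paper.

```python
import math


def aggregating_algorithm(expert_preds, outcomes, eta=2.0):
    """Aggregating Algorithm for square loss, outcomes and predictions in [0, 1].

    expert_preds[t][i] is expert i's prediction at step t;
    outcomes[t] is the observed outcome at step t.
    Returns the algorithm's predictions and its cumulative square loss.
    """
    n = len(expert_preds[0])
    log_w = [0.0] * n  # log-weights; uniform prior over the n experts
    preds, total_loss = [], 0.0
    for t, outcome in enumerate(outcomes):
        gammas = expert_preds[t]

        # Generalized prediction g(omega) = -(1/eta) * log sum_i w_i exp(-eta * loss(omega, gamma_i)),
        # evaluated via log-sum-exp; normalization of the weights cancels in g(0) - g(1).
        def g(omega):
            terms = [log_w[i] - eta * (omega - gammas[i]) ** 2 for i in range(n)]
            m = max(terms)
            return -(m + math.log(sum(math.exp(s - m) for s in terms))) / eta

        # Substitution function for square loss: with one expert it returns that
        # expert's prediction, and for eta <= 2 it guarantees
        # (omega - gamma)^2 <= g(omega) for omega in {0, 1}.
        gamma = min(1.0, max(0.0, 0.5 + (g(0) - g(1)) / 2.0))
        preds.append(gamma)
        total_loss += (gamma - outcome) ** 2

        # Loss update: each expert's weight decays with its incurred loss.
        for i in range(n):
            log_w[i] -= eta * (outcome - gammas[i]) ** 2
    return preds, total_loss
```

For a finite pool of n experts and square loss, the Aggregating Algorithm's cumulative loss is guaranteed to exceed the best expert's by at most (ln n)/2, a constant regret; the paper's squared-logarithm bound arises because its expert class grows with the approximation parameter rather than staying fixed.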