Real and complex analysis, 3rd ed.
Adaptation in natural and artificial systems
Introduction to Reinforcement Learning
Finite-time Analysis of the Multiarmed Bandit Problem
Machine Learning
Multi-armed bandits in metric spaces
STOC '08 Proceedings of the fortieth annual ACM symposium on Theory of computing
Exploration-exploitation tradeoff using variance estimates in multi-armed bandits
Theoretical Computer Science
Characterizing truthful multi-armed bandit mechanisms: extended abstract
Proceedings of the 10th ACM conference on Electronic commerce
The price of truthfulness for pay-per-click auctions
Proceedings of the 10th ACM conference on Electronic commerce
Bandit based Monte-Carlo planning
ECML'06 Proceedings of the 17th European conference on Machine Learning
Thompson sampling: an asymptotically optimal finite-time analysis
ALT'12 Proceedings of the 23rd international conference on Algorithmic Learning Theory
This paper studies the deviations of the regret in a stochastic multi-armed bandit problem. When the total number of plays n is known beforehand by the agent, Audibert et al. (2009) exhibit a policy such that, with probability at least 1 - 1/n, the regret of the policy is of order log n. They also show that this property is not shared by the popular ucb1 policy of Auer et al. (2002). This work first answers an open question: it extends this negative result to any anytime policy. The second contribution is the design of anytime robust policies for specific multi-armed bandit problems in which restrictions are put on the set of possible distributions of the arms.
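For reference, the ucb1 policy of Auer et al. (2002) discussed in the abstract can be sketched in a few lines: play each arm once, then always pull the arm maximizing its empirical mean plus an exploration bonus. The two-armed Bernoulli instance below is purely illustrative; the function names and the bandit parameters are assumptions, not part of the paper.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """UCB1 policy (Auer et al., 2002): after an initialization round
    playing each arm once, pull the arm maximizing
    empirical mean + sqrt(2 * ln t / n_i)."""
    counts = [0] * n_arms      # n_i: number of pulls of each arm
    sums = [0.0] * n_arms      # cumulative reward of each arm
    for t in range(horizon):
        if t < n_arms:
            arm = t  # initialization: play every arm once
        else:
            arm = max(
                range(n_arms),
                key=lambda i: sums[i] / counts[i]
                + math.sqrt(2.0 * math.log(t + 1) / counts[i]),
            )
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    return counts, sums

# Hypothetical two-armed Bernoulli bandit with means 0.9 and 0.1.
random.seed(0)
means = [0.9, 0.1]
counts, sums = ucb1(lambda i: float(random.random() < means[i]), 2, 1000)
```

With a large gap between the arm means, the policy concentrates almost all of its pulls on the better arm, pulling the suboptimal one only on the order of log n times; the deviations of the resulting regret around that logarithmic rate are precisely what the paper analyzes.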