The question of the optimality of Thompson Sampling for solving the stochastic multi-armed bandit problem had been open since 1933. In this paper we answer it positively for the case of Bernoulli rewards by providing the first finite-time analysis that matches the asymptotic rate given in the Lai and Robbins lower bound on the cumulative regret. The proof is accompanied by a numerical comparison with other optimal policies, experiments that had until now been lacking in the literature for the Bernoulli case.
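
For context, the Lai and Robbins lower bound referenced above states that, for Bernoulli rewards with arm means $\mu_a$ and optimal mean $\mu^* = \max_a \mu_a$, any uniformly good policy incurs cumulative regret $R_T$ satisfying

$$\liminf_{T \to \infty} \frac{\mathbb{E}[R_T]}{\ln T} \;\ge\; \sum_{a:\, \mu_a < \mu^*} \frac{\mu^* - \mu_a}{\mathrm{kl}(\mu_a, \mu^*)}, \qquad \mathrm{kl}(p, q) = p \ln\frac{p}{q} + (1-p) \ln\frac{1-p}{1-q},$$

so a policy is asymptotically optimal when its regret matches this rate.

To make the policy under discussion concrete, the following is a minimal sketch of Thompson Sampling for Bernoulli bandits with independent Beta(1, 1) priors on each arm; the arm means, horizon, and the helper name `thompson_sampling` are illustrative choices, not taken from the paper.

```python
# Minimal sketch of Thompson Sampling for Bernoulli bandits.
# Illustrative only: arm means and horizon are made-up parameters.
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Run Thompson Sampling with Beta(1, 1) priors on each arm.

    Returns the cumulative (pseudo-)regret after `horizon` pulls.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    successes = [0] * n_arms  # observed 1-rewards per arm (posterior alpha - 1)
    failures = [0] * n_arms   # observed 0-rewards per arm (posterior beta - 1)
    best_mean = max(true_means)
    regret = 0.0

    for _ in range(horizon):
        # Draw one sample from each arm's Beta posterior and
        # play the arm with the largest sample.
        samples = [rng.betavariate(successes[a] + 1, failures[a] + 1)
                   for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])

        # Observe a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < true_means[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        regret += best_mean - true_means[arm]

    return regret

if __name__ == "__main__":
    print(thompson_sampling([0.9, 0.8], horizon=10000))
```

On each round the policy samples a plausible mean for every arm from its posterior and acts greedily on those samples, so exploration decays naturally as the posteriors concentrate on the true means.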