We consider a generalization of stochastic bandits where the set of arms, X, is allowed to be a generic measurable space and the mean-payoff function is "locally Lipschitz" with respect to a dissimilarity function that is known to the decision maker. Under this condition we construct an arm selection policy, called HOO (hierarchical optimistic optimization), with improved regret bounds compared to previous results for a large class of problems. In particular, our results imply that if X is the unit hypercube in a Euclidean space and the mean-payoff function has a finite number of global maxima around which the behavior of the function is locally continuous with a known smoothness degree, then the expected regret of HOO is bounded up to a logarithmic factor by √n, that is, the rate of growth of the regret is independent of the dimension of the space. We also prove the minimax optimality of our algorithm when the dissimilarity is a metric. Our basic strategy has quadratic computational complexity as a function of the number of time steps and does not rely on the doubling trick. We also introduce a modified strategy, which relies on the doubling trick but runs in linearithmic time. Both results are improvements with respect to previous approaches.
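The HOO strategy described above can be sketched in a few dozen lines. The version below is a minimal, illustrative implementation for X = [0,1] with binary interval splits, not the authors' reference code: at each round it descends the tree along the child with the larger B-value, expands the selected leaf, plays a point in that leaf's region, and then refreshes empirical means, U-values, and B-values along the traversed path. The parameters `nu` and `rho` (the smoothness constants of the dissimilarity), the uniform sampling inside a region, and the final "most-played path" recommendation are all assumptions made for this sketch.

```python
import math
import random


class Node:
    """A node of the HOO tree covering the interval [lo, hi)."""

    def __init__(self, depth, lo, hi):
        self.depth, self.lo, self.hi = depth, lo, hi
        self.count = 0          # number of times a point in this region was played
        self.mean = 0.0         # empirical mean reward of those plays
        self.children = None    # (left, right) once the node is expanded
        self.B = float("inf")   # optimistic B-value; infinite until first visited


def hoo(f, n, nu=1.0, rho=0.5, rng=random.random):
    """Minimal HOO sketch on X = [0, 1]; f is the (possibly noisy) payoff."""
    root = Node(0, 0.0, 1.0)
    for t in range(1, n + 1):
        # Descend: repeatedly follow the child with the larger B-value.
        path, node = [root], root
        while node.children is not None:
            left, right = node.children
            node = left if left.B >= right.B else right
            path.append(node)
        # Expand the selected leaf and play a point in its region.
        mid = (node.lo + node.hi) / 2
        node.children = (Node(node.depth + 1, node.lo, mid),
                         Node(node.depth + 1, mid, node.hi))
        x = node.lo + (node.hi - node.lo) * rng()
        reward = f(x)
        # Update counts and means along the path, then B-values bottom-up:
        # B = min(U, max of children's B), with U = mean + confidence + nu*rho^depth.
        for nd in path:
            nd.count += 1
            nd.mean += (reward - nd.mean) / nd.count
        for nd in reversed(path):
            u = nd.mean + math.sqrt(2 * math.log(t) / nd.count) + nu * rho ** nd.depth
            cb = max(c.B for c in nd.children) if nd.children else float("inf")
            nd.B = min(u, cb)
    # Recommend the midpoint of the most-played region (one common heuristic).
    node = root
    while node.children and any(c.count for c in node.children):
        left, right = node.children
        node = left if left.count >= right.count else right
    return (node.lo + node.hi) / 2
```

Running it on a smooth payoff with a single maximum, e.g. `hoo(lambda x: 1 - (x - 0.7) ** 2, 500)`, should return a point near 0.7. Note that this naive per-round path update gives the quadratic-in-n running time mentioned above; the linearithmic variant in the paper additionally relies on the doubling trick.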