We consider bandit problems involving a large (possibly infinite) collection of arms, in which the expected reward of each arm is a linear function of an r-dimensional random vector Z ∈ R^r, where r ≥ 2. The objective is to minimize the cumulative regret and Bayes risk. When the set of arms is the unit sphere, we prove that the regret and Bayes risk are of order Θ(r√T), by establishing a lower bound for an arbitrary policy and showing that a matching upper bound is achieved by a policy that alternates between exploration and exploitation phases. The phase-based policy is also shown to be effective when the set of arms satisfies a strong convexity condition. For the case of a general set of arms, we describe a near-optimal policy whose regret and Bayes risk admit upper bounds of the form O(r√T log^{3/2} T).
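The phase-based idea described above can be illustrated with a minimal simulation. The sketch below is a hypothetical instance, not the paper's exact algorithm: it assumes a unit-sphere arm set, Gaussian reward noise, and arbitrary phase lengths (`explore_len`, `exploit_len` are illustrative choices). During exploration the policy plays the r standard basis vectors to estimate Z coordinate-wise; during exploitation it commits to the greedy arm, the normalized estimate of Z, which is the reward-maximizing arm on the unit sphere.

```python
import numpy as np

rng = np.random.default_rng(0)

r, T = 5, 2000                 # dimension and horizon (illustrative values)
Z = rng.normal(size=r)         # unknown parameter vector (hypothetical instance)
Z /= 2 * np.linalg.norm(Z)     # rescale so mean rewards stay modest

def pull(u, noise=0.1):
    """Expected reward is linear in Z: E[reward | arm u] = u @ Z."""
    return u @ Z + noise * rng.normal()

est_sums = np.zeros(r)   # running sums of basis-arm rewards
cycles = 0               # number of completed exploration phases
total_reward = 0.0
exploit_len = 10 * r     # hypothetical exploitation-phase length

t = 0
while t < T:
    # Exploration phase: play each standard basis vector once.
    for i in range(r):
        if t >= T:
            break
        est_sums[i] += pull(np.eye(r)[i])
        t += 1
    cycles += 1
    z_hat = est_sums / cycles
    greedy = z_hat / np.linalg.norm(z_hat)  # best arm on the unit sphere
    # Exploitation phase: commit to the greedy arm.
    for _ in range(exploit_len):
        if t >= T:
            break
        total_reward += pull(greedy)
        t += 1
```

Because the optimal arm on the unit sphere is Z/‖Z‖, the greedy arm converges to it as the coordinate estimates sharpen; the paper's analysis tunes the phase lengths so that the resulting regret matches the Θ(r√T) lower bound.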