We consider stochastic multi-armed bandit problems in which each arm generates i.i.d. rewards according to an unknown distribution. Whereas classical bandit algorithms only maximize the expected reward, we consider the problem of minimizing risk using notions such as the value-at-risk, the average value-at-risk, and the mean-variance risk. We present algorithms that minimize the risk over single and multiple time periods, along with PAC accuracy guarantees given a finite number of reward samples. In the single-period case, we show that finding the arm with the least risk requires not many more samples than finding the arm with the highest expected reward. Although minimizing the multi-period value-at-risk is known to be hard, we present an algorithm with comparable sample complexity under additional assumptions.
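To make the single-period setting concrete, the sketch below selects the arm with the least empirical value-at-risk from a finite batch of reward samples per arm. This is only an illustration of the problem setup, not the paper's algorithm or its sample-complexity analysis: the function names (`empirical_var`, `least_risk_arm`), the plain order-statistic quantile estimator, and the two Gaussian example arms are all assumptions introduced here.

```python
import random


def empirical_var(samples, alpha=0.95):
    """Empirical value-at-risk at level alpha: the alpha-quantile of the
    loss (negative reward), estimated by a plain order statistic."""
    losses = sorted(-r for r in samples)
    idx = min(int(alpha * len(losses)), len(losses) - 1)
    return losses[idx]


def least_risk_arm(arms, n_samples=2000, alpha=0.95, seed=0):
    """Pull each arm n_samples times and return the index of the arm
    with the smallest empirical value-at-risk."""
    rng = random.Random(seed)
    risks = [
        empirical_var([pull(rng) for _ in range(n_samples)], alpha)
        for pull in arms
    ]
    return min(range(len(arms)), key=lambda i: risks[i])


# Two hypothetical arms with the same mean reward but different variance:
# a risk-neutral (expected-reward) criterion cannot distinguish them,
# while the value-at-risk criterion prefers the low-variance arm.
arms = [
    lambda rng: rng.gauss(1.0, 0.1),  # low-variance arm
    lambda rng: rng.gauss(1.0, 2.0),  # high-variance arm
]
print(least_risk_arm(arms))  # selects the low-variance arm (index 0)
```

The example highlights why risk minimization differs from classical reward maximization: both arms have expected reward 1.0, so any algorithm that only estimates means is indifferent between them, yet their tail losses differ sharply.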