The Nonstochastic Multiarmed Bandit Problem. SIAM Journal on Computing.
Path kernels and multiplicative updates. The Journal of Machine Learning Research.
Improvements to the Linear Programming Based Scheduling of Web Advertisements. Electronic Commerce Research.
Dependent rounding and its applications to approximation algorithms. Journal of the ACM (JACM).
The On-Line Shortest Path Problem Under Partial Monitoring. The Journal of Machine Learning Research.
Ranked bandits in metric spaces: learning diverse rankings over large document collections. The Journal of Machine Learning Research.
Adversarial bandit problems, studied by Auer et al. [4], are multi-armed bandit problems in which no stochastic assumption is made about the process that generates the rewards for actions. In this paper, we extend their theory to the case in which k (≥ 1) distinct actions are selected at each time step. As algorithms for this problem, we analyze an extension of Exp3 [4] and an application of a bandit online linear optimization algorithm [1], in addition to two existing algorithms (Exp3, ComBand [5]), in terms of time and space efficiency and of regret against the best fixed set of actions. The extension of Exp3, called Exp3.M, performs best with respect to all of these measures: it runs in O(K(log k + 1)) time and O(K) space, and suffers at most O(√(kTK log(K/k))) regret, where K is the number of possible actions and T is the number of iterations. The regret upper bound we prove for Exp3.M generalizes the bound proved by Auer et al. for Exp3.
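To make the ingredients mentioned in the abstract concrete, the following Python sketch shows one way an Exp3-style algorithm that plays k actions per round can be assembled: exponential weights over the K actions, a weight-capping step so that no marginal probability exceeds 1, dependent rounding to draw exactly k actions with the prescribed marginals, and importance-weighted reward estimates. This is a minimal illustration under those assumptions, not the authors' pseudocode; the names dep_round, capped_weights, exp3m, and get_rewards, as well as the choice of gamma in the usage snippet, are ours.

import math
import random

def dep_round(p, rng=random):
    # Dependent rounding: returns a set S with Pr[i in S] = p[i] and
    # |S| = sum(p) whenever sum(p) is an integer.
    p = list(p)
    eps = 1e-9
    while True:
        frac = [i for i, v in enumerate(p) if eps < v < 1.0 - eps]
        if len(frac) < 2:
            break
        i, j = frac[0], frac[1]
        a = min(1.0 - p[i], p[j])
        b = min(p[i], 1.0 - p[j])
        if rng.random() < b / (a + b):      # expected change of each entry is zero
            p[i], p[j] = p[i] + a, p[j] - a
        else:
            p[i], p[j] = p[i] - b, p[j] + b
    return {i for i, v in enumerate(p) if v > 0.5}

def capped_weights(w, c):
    # Cap the largest weights at a common value alpha so that, after capping,
    # no weight exceeds a fraction c of the capped total (found by a sorted scan).
    K = len(w)
    total = sum(w)
    if max(w) < c * total:
        return list(w), set()
    order = sorted(range(K), key=lambda i: -w[i])
    sorted_w = [w[i] for i in order]
    rest = total
    for m in range(1, K):
        rest -= sorted_w[m - 1]             # mass of the K - m smallest weights
        denom = 1.0 - c * m
        if denom <= 0:
            break
        alpha = c * rest / denom            # solves alpha = c * (m * alpha + rest)
        if sorted_w[m] <= alpha <= sorted_w[m - 1]:
            capped = set(order[:m])
            return [alpha if i in capped else wi for i, wi in enumerate(w)], capped
    return list(w), set()                   # fallback; not reached for valid gamma, k

def exp3m(K, k, gamma, T, get_rewards, rng=random):
    # Exp3.M-style loop: exponential weights over K arms, k arms played per round,
    # importance-weighted estimates for the played, uncapped arms only.
    w = [1.0] * K
    c = (1.0 / k - gamma / K) / (1.0 - gamma)    # capping threshold keeps p_i <= 1
    total_reward = 0.0
    for t in range(T):
        w_cap, capped = capped_weights(w, c)
        W = sum(w_cap)
        p = [k * ((1.0 - gamma) * wi / W + gamma / K) for wi in w_cap]
        S = dep_round(p, rng)                    # |S| = k and Pr[i in S] = p[i]
        rewards = get_rewards(S, t)              # dict: arm -> reward in [0, 1]
        total_reward += sum(rewards.values())
        for i in S:
            if i not in capped:                  # capped arms keep their old weight
                x_hat = rewards[i] / p[i]        # importance-weighted estimate
                w[i] *= math.exp(k * gamma * x_hat / K)
        mx = max(w)
        w = [wi / mx for wi in w]                # rescaling is scale-invariant; avoids overflow
    return total_reward

# Toy usage (illustrative only): arm i pays a deterministic reward of i / K.
if __name__ == "__main__":
    K, k, T = 10, 3, 2000
    # A gamma of the order suggested by the regret bound; the exact tuning is in the paper.
    gamma = min(0.5, math.sqrt(K * math.log(K / k) / ((math.e - 1) * k * T)))
    def reward(S, t):
        return {i: i / K for i in S}
    print("cumulative reward:", exp3m(K, k, gamma, T, reward))

In this sketch the capping step guarantees that every marginal probability p_i is at most 1 (capped arms have p_i = 1 and are therefore always played), the marginals sum to k, and dep_round then selects exactly k arms with those marginals; the per-round cost is dominated by the sorted scan, in line with the near-linear time reported in the abstract.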