Algorithms for adversarial bandit problems with multiple plays

  • Authors:
  • Taishi Uchiya; Atsuyoshi Nakamura; Mineichi Kudo

  • Affiliations:
  • Graduate School of Information Science and Technology, Hokkaido University, Hokkaido, Japan (all authors)

  • Venue:
  • ALT'10: Proceedings of the 21st International Conference on Algorithmic Learning Theory
  • Year:
  • 2010


Abstract

Adversarial bandit problems, studied by Auer et al. [4], are multi-armed bandit problems in which no stochastic assumption is made on the process generating the rewards for actions. In this paper, we extend their theory to the case where k (≥ 1) distinct actions are selected at each time step. As algorithms for this problem, we analyze an extension of Exp3 [4] and an application of a bandit online linear optimization algorithm [1], in addition to two existing algorithms (Exp3 and ComBand [5]), comparing them in terms of time and space efficiency and regret against the best fixed action set. The extension of Exp3, called Exp3.M, performs best on all measures: it runs in O(K(log k + 1)) time and O(K) space, and suffers at most O(√(kTK log(K/k))) regret, where K is the number of possible actions and T is the number of iterations. The regret upper bound we prove for Exp3.M generalizes the one Auer et al. proved for Exp3.
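
For concreteness, below is a minimal Python sketch of how an Exp3.M-style round could look, based on the abstract's description: weights are capped so that no marginal selection probability exceeds 1, a dependent-rounding step draws k distinct actions matching those marginals, and importance-weighted estimates update the uncapped weights. The helper names (capped_weights, depround, get_rewards) and the exact use of the exploration parameter gamma are assumptions, not the paper's pseudocode; this simple version also sorts weights each round, so it does not achieve the stated O(K(log k + 1)) per-round time.

```python
# Illustrative sketch of a multiple-play Exp3 variant; not the paper's exact algorithm.
import math
import random

def capped_weights(w, k, gamma):
    """Cap the largest weights at a threshold alpha so that no action's
    selection probability would exceed 1; skip capping when unnecessary."""
    K = len(w)
    rhs = (1.0 / k - gamma / K) / (1.0 - gamma)
    if max(w) < rhs * sum(w):
        return list(w)                       # no weight pushes any p_i above 1
    ws = sorted(w, reverse=True)
    suffix = sum(ws)                         # sum of the weights left uncapped
    for m in range(1, k):                    # fewer than k actions can have p_i = 1
        suffix -= ws[m - 1]
        if 1.0 - rhs * m <= 0.0:
            break
        alpha = rhs * suffix / (1.0 - rhs * m)
        if ws[m] < alpha <= ws[m - 1]:       # exactly the top m weights get capped
            return [min(wi, alpha) for wi in w]
    raise RuntimeError("no valid capping threshold found")

def depround(p, rng=random):
    """Dependent rounding: draw a size-k subset whose per-action inclusion
    probabilities equal the marginals p (which must sum to k)."""
    p, eps = list(p), 1e-9
    frac = [i for i, v in enumerate(p) if eps < v < 1.0 - eps]
    while len(frac) >= 2:
        i, j = frac[0], frac[1]
        a = min(1.0 - p[i], p[j])
        b = min(p[i], 1.0 - p[j])
        # Shift mass between i and j; at least one of them hits 0 or 1,
        # and the expected change of each marginal is zero.
        if rng.random() < b / (a + b):
            p[i], p[j] = p[i] + a, p[j] - a
        else:
            p[i], p[j] = p[i] - b, p[j] + b
        frac = [x for x in (i, j) if eps < p[x] < 1.0 - eps] + frac[2:]
    return [i for i, v in enumerate(p) if v > 1.0 - eps]

def exp3m(K, k, T, gamma, get_rewards, rng=random):
    """Run T rounds; get_rewards(t) returns the adversary's reward vector
    in [0, 1]^K for round t (a stand-in for the environment)."""
    w = [1.0] * K
    total = 0.0
    for t in range(T):
        wp = capped_weights(w, k, gamma)
        Wp = sum(wp)
        p = [k * ((1.0 - gamma) * wi / Wp + gamma / K) for wi in wp]
        chosen = depround(p, rng)
        rewards = get_rewards(t)
        total += sum(rewards[i] for i in chosen)
        for i in chosen:
            if p[i] < 1.0 - 1e-9:            # capped actions (p_i = 1) keep their weight
                xhat = rewards[i] / p[i]     # importance-weighted reward estimate
                w[i] *= math.exp(k * gamma * xhat / K)
    return total
```

The key design point mirrored here is the split between capping and rounding: capping guarantees every marginal p_i lies in [gamma k / K, 1], and dependent rounding then converts those marginals into a valid set of k distinct actions without distorting the per-action selection probabilities the regret analysis relies on.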