Robbing the bandit: less regret in online geometric optimization against an adaptive adversary

  • Authors:
  • Varsha Dani; Thomas P. Hayes

  • Affiliations:
  • University of Chicago; University of California at Berkeley

  • Venue:
  • SODA '06: Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete Algorithms
  • Year:
  • 2006


Abstract

We consider "online bandit geometric optimization," a problem of iterated decision making in a largely unknown and constantly changing environment. The goal is to minimize "regret," defined as the difference between the actual loss of an online decision-making procedure and that of the best single decision in hindsight. "Geometric optimization" refers to a generalization of the well-known multi-armed bandit problem, in which the decision space is some bounded subset of R^d, the adversary is restricted to linear loss functions, and regret bounds should depend on the dimensionality d rather than on the total number of possible decisions. "Bandit" refers to the setting in which the algorithm is only told its loss on each round, rather than the entire loss function.

McMahan and Blum [10] presented the best known algorithm in this setting, and proved that its expected additive regret is O(poly(d) T^{3/4}). We simplify and improve their analysis of this algorithm to obtain regret O(poly(d) T^{2/3}). We also prove that, for a large class of full-information online optimization problems, the optimal regret against an adaptive adversary is the same as against a non-adaptive adversary.
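The regret definition in the abstract can be made concrete with a minimal sketch. The code below is illustrative only and is not the McMahan-Blum algorithm or the authors' method: it computes the regret of an arbitrary played sequence against linear loss vectors, taking the decision set to be the standard basis vectors of R^d (the classical multi-armed bandit case). The function name `regret` and the uniform-random learner are hypothetical choices for the example.

```python
import random

def regret(decisions, losses):
    """Regret of a played sequence vs. the best fixed decision in hindsight.

    decisions: list of decision vectors x_t in R^d played by the learner
    losses:    list of linear loss vectors l_t in R^d chosen by the adversary
    The loss on round t is the inner product <l_t, x_t>.
    (Illustrative sketch only; not the algorithm from the paper.)
    """
    d = len(losses[0])
    # Cumulative loss actually incurred by the learner.
    actual = sum(sum(l[i] * x[i] for i in range(d))
                 for l, x in zip(losses, decisions))
    # Best single decision in hindsight, restricted here to the
    # standard basis vectors (the multi-armed bandit decision set).
    cumulative = [sum(l[i] for l in losses) for i in range(d)]
    best = min(cumulative)
    return actual - best

# Toy run: 3 arms over 100 rounds, a uniformly random learner
# against fixed (non-adaptive) losses.
random.seed(0)
d, T = 3, 100
losses = [[random.random() for _ in range(d)] for _ in range(T)]
plays = []
for _ in range(T):
    arm = random.randrange(d)
    plays.append([1.0 if i == arm else 0.0 for i in range(d)])
print(regret(plays, losses))
```

In the bandit setting described in the abstract, the learner would only observe the scalar `<l_t, x_t>` on each round, not the full vector `l_t`; the function above is a hindsight accounting tool, not an online algorithm.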