Playing Games with Approximation Algorithms

  • Authors:
  • Sham M. Kakade, Adam Tauman Kalai, Katrina Ligett

  • Contact:
  • sham@tti-c.org, atk@cc.gatech.edu, katrina@cs.cmu.edu

  • Venue:
  • SIAM Journal on Computing
  • Year:
  • 2009

Abstract

In an online linear optimization problem, in each period $t$ an online algorithm chooses $s_t\in\mathcal{S}$ from a fixed (possibly infinite) set $\mathcal{S}$ of feasible decisions. Nature (who may be adversarial) chooses a weight vector $w_t\in\mathbb{R}^n$, and the algorithm incurs cost $c(s_t,w_t)$, where $c$ is a fixed cost function that is linear in the weight vector. In the full-information setting, the vector $w_t$ is then revealed to the algorithm; in the bandit setting, only the cost experienced, $c(s_t,w_t)$, is revealed. The goal of the online algorithm is to perform nearly as well as the best fixed $s\in\mathcal{S}$ in hindsight. Many repeated decision-making problems with weights fit naturally into this framework, such as online shortest-path, online traveling salesman problem (TSP), online clustering, and online weighted set cover. Previously, it was shown how to convert any efficient exact offline optimization algorithm for such a problem into an efficient online algorithm in both the full-information and the bandit settings, with average cost nearly as good as that of the best fixed $s\in\mathcal{S}$ in hindsight. However, in the case where the offline algorithm is an approximation algorithm with ratio $\alpha > 1$, the previous approach worked only for special types of approximation algorithms. We show how to convert any offline approximation algorithm for a linear optimization problem into a corresponding online approximation algorithm, with a polynomial blowup in runtime. If the offline algorithm has an $\alpha$-approximation guarantee, then the expected cost of the online algorithm on any sequence is not much larger than $\alpha$ times that of the best $s\in\mathcal{S}$, where the best is chosen with the benefit of hindsight. Our main innovation is combining Zinkevich's algorithm for convex optimization with a geometric transformation that can be applied to any approximation algorithm. Standard techniques generalize the above result to the bandit setting, except that a “barycentric spanner” for the problem is also (provably) necessary as input. Our algorithm can also be viewed as a method for playing large repeated games, where one can compute only approximate best responses, rather than best responses.
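
To make the online protocol concrete, the following Python sketch runs Zinkevich-style online gradient descent in the full-information setting, using the probability simplex as a stand-in for the convex hull of $\mathcal{S}$. The function names and the step size eta are illustrative assumptions, not from the paper; the exact Euclidean projection below is the step that the paper's geometric transformation replaces when only an $\alpha$-approximation oracle for $\mathcal{S}$ is available.

    import numpy as np

    def project_to_simplex(v):
        """Euclidean projection of v onto the probability simplex
        {x : x >= 0, sum(x) = 1}, via the standard sort-based method."""
        u = np.sort(v)[::-1]
        css = np.cumsum(u) - 1.0
        rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
        return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

    def online_gradient_descent(weight_vectors, eta=0.1):
        """Zinkevich-style online gradient descent for online linear
        optimization over the simplex (a stand-in for conv(S)).

        In each period t the learner plays x_t, pays <x_t, w_t>, then
        observes w_t (full information) and takes a projected gradient
        step. Choosing eta ~ 1/sqrt(T) gives the standard O(sqrt(T))
        regret bound against the best fixed point in hindsight.
        """
        W = np.asarray(weight_vectors, dtype=float)
        n = W.shape[1]
        x = np.full(n, 1.0 / n)           # start at the simplex center
        decisions, total_cost = [], 0.0
        for w in W:
            decisions.append(x.copy())
            total_cost += float(x @ w)    # linear cost c(x_t, w_t)
            x = project_to_simplex(x - eta * w)
        return decisions, total_cost

    # Example: two decisions with alternating costs; the average cost
    # approaches that of the best fixed decision in hindsight.
    ws = [[1.0, 0.0], [0.0, 1.0]] * 50
    _, cost = online_gradient_descent(ws)

When $\mathcal{S}$ is a discrete set and only an $\alpha$-approximation oracle is available, exact projection onto the convex hull of $\mathcal{S}$ is generally infeasible; the paper's geometric transformation plays the role of the projection step and yields the guarantee stated above: expected cost not much larger than $\alpha$ times that of the best fixed $s\in\mathcal{S}$ in hindsight.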