Bandit problems and the exploration/exploitation tradeoff

  • Authors:
  • W. G. Macready; D. H. Wolpert

  • Affiliations:
  • Bios Group, Santa Fe, NM; -

  • Venue:
  • IEEE Transactions on Evolutionary Computation
  • Year:
  • 1998

Abstract

We explore the two-armed bandit with Gaussian payoffs as a theoretical model for optimization. The problem is formulated from a Bayesian perspective, and the optimal strategy for both one and two pulls is provided. We present regions of parameter space where a greedy strategy is provably optimal. We also compare the greedy and optimal strategies to one based on a genetic algorithm. In doing so, we correct a previous error in the literature concerning the Gaussian bandit problem and the supposed optimality of genetic algorithms for this problem. Finally, we provide an analytically simple bandit model that is more directly applicable to optimization theory than the traditional bandit problem and determine a near-optimal strategy for that model.
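
To make the setting in the abstract concrete, the sketch below implements a two-armed Gaussian bandit with a conjugate Gaussian prior over each arm's mean and a greedy strategy that always pulls the arm with the higher posterior mean. This is not the authors' code or their optimal strategy; the priors, noise level, true means, and horizon are illustrative assumptions.

```python
# Minimal sketch of a two-armed Gaussian bandit with a Bayesian greedy player.
# All numerical values are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# True (unknown to the player) arm means; payoff noise std is assumed known.
true_means = np.array([0.3, 0.5])
noise_std = 1.0

# Gaussian prior N(mu0, tau0^2) over each arm's mean.
mu = np.array([0.0, 0.0])    # current posterior means
tau2 = np.array([1.0, 1.0])  # current posterior variances

def pull(arm):
    """Draw a Gaussian payoff from the chosen arm."""
    return rng.normal(true_means[arm], noise_std)

def update(arm, reward):
    """Conjugate normal-normal update of the pulled arm's posterior."""
    precision = 1.0 / tau2[arm] + 1.0 / noise_std**2
    mu[arm] = (mu[arm] / tau2[arm] + reward / noise_std**2) / precision
    tau2[arm] = 1.0 / precision

# Greedy strategy: always pull the arm with the larger posterior mean.
total = 0.0
for t in range(100):
    arm = int(np.argmax(mu))
    reward = pull(arm)
    update(arm, reward)
    total += reward

print(f"greedy payoff over 100 pulls: {total:.2f}, posterior means: {mu}")
```

The greedy rule shown here never deliberately explores, which is exactly why the paper studies when such a strategy can or cannot be optimal relative to strategies that trade exploitation against exploration.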