Sequential Sampling to Myopically Maximize the Expected Value of Information

  • Authors:
  • Stephen E. Chick; Jürgen Branke; Christian Schmidt

  • Affiliations:
  • INSEAD, Technology and Operations Management Area, F-77305 Fontainebleau, France; Institute AIFB, University of Karlsruhe (TH), D-76128 Karlsruhe, Germany; Locom Software GmbH, D-76131 Karlsruhe, Germany

  • Venue:
  • INFORMS Journal on Computing
  • Year:
  • 2010

Abstract

Statistical selection procedures are used to select the best of a finite set of alternatives, where “best” is defined in terms of each alternative's unknown expected value, and the expected values are inferred through statistical sampling. One effective approach, based on a Bayesian probability model for the unknown mean performance of each alternative, allocates samples by maximizing an approximation to the expected value of information (EVI) from those samples. The approximations used include asymptotic and probabilistic approximations. This paper derives sampling allocations that avoid most of those approximations to the EVI but that entail sequential myopic sampling from a single alternative per stage of sampling. We demonstrate empirically that the benefits of reducing the number of approximations in the previous algorithms are typically outweighed by the deleterious effects of a sequential one-step myopic allocation when more than a few dozen samples are allocated. Theory clarifies the derivation of selection procedures that are based on the EVI.
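
The following is a minimal Python sketch of the general idea described in the abstract: sequentially allocating one sample per stage to the alternative with the largest one-step (myopic) EVI, then updating a Bayesian posterior. It assumes independent normal posteriors with known sampling variances and uses a knowledge-gradient-style approximation of the one-step EVI; the function names and the specific formula are illustrative assumptions, not the exact procedure derived in the paper.

```python
import numpy as np
from scipy.stats import norm

def one_step_evi(post_mean, post_var, sample_var):
    """Approximate one-step EVI of sampling each alternative once, under
    independent normal posteriors (knowledge-gradient-style stand-in for
    the paper's EVI criterion, not the exact expression derived there)."""
    k = len(post_mean)
    evi = np.zeros(k)
    for i in range(k):
        # Std. dev. of the predictive change in alternative i's posterior mean.
        sigma_tilde = post_var[i] / np.sqrt(post_var[i] + sample_var[i])
        # Gap between alternative i and the best of the other alternatives.
        best_other = np.max(np.delete(post_mean, i))
        z = -abs(post_mean[i] - best_other) / max(sigma_tilde, 1e-12)
        # Expected improvement in the value of the eventual selection decision.
        evi[i] = sigma_tilde * (norm.pdf(z) + z * norm.cdf(z))
    return evi

def myopic_sequential_selection(sample_fn, prior_mean, prior_var, sample_var, budget):
    """Each stage: sample once from the alternative maximizing the one-step
    EVI approximation, then update its normal posterior; finally select the
    alternative with the largest posterior mean."""
    post_mean = np.array(prior_mean, dtype=float)
    post_var = np.array(prior_var, dtype=float)
    sample_var = np.array(sample_var, dtype=float)
    for _ in range(budget):
        i = int(np.argmax(one_step_evi(post_mean, post_var, sample_var)))
        y = sample_fn(i)  # one noisy observation of alternative i
        precision = 1.0 / post_var[i] + 1.0 / sample_var[i]
        post_mean[i] = (post_mean[i] / post_var[i] + y / sample_var[i]) / precision
        post_var[i] = 1.0 / precision
    return int(np.argmax(post_mean))
```

As a usage sketch, `sample_fn` could be `lambda i: np.random.normal(true_means[i], 1.0)` for a simulated problem; the one-sample-per-stage loop is what the abstract refers to as sequential one-step myopic allocation.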