Dynamic sample budget allocation in model-based optimization

  • Authors:
  • Jiaqiao Hu, Hyeong Soo Chang, Michael C. Fu, Steven I. Marcus

  • Affiliations:
  • Department of Applied Mathematics and Statistics, State University of New York, Stony Brook, NY 11794, USA
  • Department of Computer Science and Engineering, Sogang University, Seoul, Korea
  • Robert H. Smith School of Business and The Institute for Systems Research, University of Maryland, College Park, USA
  • Department of Electrical and Computer Engineering and The Institute for Systems Research, University of Maryland, College Park, USA

  • Venue:
  • Journal of Global Optimization
  • Year:
  • 2011

Abstract

Model-based search methods are a class of optimization techniques that explore the solution space by sampling from an underlying probability distribution "model," which is updated iteratively after the samples drawn at each iteration are evaluated. This paper aims to improve the sampling efficiency of model-based methods by considering a generalization in which a population of distribution models is maintained and propagated from generation to generation. A key issue in the proposed approach is how to allocate the sampling budget among the population of models so as to maximize algorithm performance. We formulate this problem as a generalized max k-armed bandit problem and derive an efficient dynamic sample allocation scheme, based on Markov decision theory, that adaptively allocates computational resources. The proposed allocation scheme is then further used to update the current population so as to produce an improving population of models. Our preliminary numerical results indicate that the proposed procedure may considerably reduce the number of function evaluations needed to obtain high-quality solutions, further enhancing the value of model-based methods for optimization problems in which function evaluations are expensive.
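To make the setting concrete, the following is a minimal sketch of model-based search with a population of distribution models and adaptive budget allocation. All specifics here are illustrative assumptions: a one-dimensional Gaussian model per population member, a cross-entropy-style elite update, a simple softmax reallocation of the per-iteration budget toward the model with the best sample so far (a crude max k-armed-bandit heuristic, not the MDP-based scheme derived in the paper), and a toy quadratic objective.

```python
import math
import random

def objective(x):
    # Toy objective (assumed for illustration): maximize -(x - 2)^2, optimum at x = 2.
    return -(x - 2.0) ** 2

def model_based_search(f, n_models=3, budget_per_iter=60, iters=40, seed=0):
    """Sketch: a population of Gaussian models shares a fixed sampling budget.

    Each iteration the budget is reallocated via a softmax over each model's
    best observed value (a max k-armed-bandit-style heuristic), then every
    model is refit to its elite samples, cross-entropy style.
    """
    rng = random.Random(seed)
    # Population of models: each is a (mean, std) pair spread over the search interval.
    models = [{"mu": rng.uniform(-10.0, 10.0), "sigma": 5.0} for _ in range(n_models)]
    best_per_model = [-math.inf] * n_models
    best_x, best_val = None, -math.inf

    for _ in range(iters):
        # Softmax allocation of the per-iteration budget over model scores
        # (equal split on the first pass, before any evaluations exist).
        scores = [0.0 if b == -math.inf else b for b in best_per_model]
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]
        total = sum(weights)
        alloc = [max(2, int(budget_per_iter * w / total)) for w in weights]

        for k, model in enumerate(models):
            samples = [rng.gauss(model["mu"], model["sigma"]) for _ in range(alloc[k])]
            vals = [f(x) for x in samples]
            # Track per-model and global bests.
            for x, v in zip(samples, vals):
                best_per_model[k] = max(best_per_model[k], v)
                if v > best_val:
                    best_x, best_val = x, v
            # Cross-entropy-style update: refit the model to the top third of samples.
            ranked = [x for _, x in sorted(zip(vals, samples), reverse=True)]
            elites = ranked[: max(1, len(samples) // 3)]
            mu = sum(elites) / len(elites)
            var = sum((x - mu) ** 2 for x in elites) / len(elites)
            model["mu"], model["sigma"] = mu, max(math.sqrt(var), 1e-3)

    return best_x, best_val

if __name__ == "__main__":
    x_star, v_star = model_based_search(objective)
    print(x_star, v_star)
```

The design point of interest is the allocation step: better-performing models receive more of the shared budget each generation, so sampling effort concentrates where high-quality solutions have been observed, which is the efficiency question the paper formalizes and solves with Markov decision theory.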