Combining gradient-based optimization with stochastic search

  • Authors:
  • Enlu Zhou; Jiaqiao Hu

  • Affiliations:
  • University of Illinois at Urbana-Champaign; State University of New York, Stony Brook, NY

  • Venue:
  • Proceedings of the Winter Simulation Conference
  • Year:
  • 2012


Abstract

We propose a stochastic search algorithm for solving non-differentiable optimization problems. At each iteration, the algorithm searches the solution space by generating a population of candidate solutions from a parameterized sampling distribution. The basic idea is to convert the original optimization problem into a differentiable problem in terms of the parameters of the sampling distribution, and then use a quasi-Newton-like method on the reformulated problem to find improved sampling distributions. The algorithm thus combines the exploration strength of stochastic search, which evaluates a population of candidate solutions across the solution space, with the rapid convergence of gradient methods, which exploit local differentiable structure. We provide numerical examples to illustrate its performance.
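To make the abstract's idea concrete, the following is a minimal sketch of this style of method, not the authors' exact algorithm: candidates are drawn from a Gaussian sampling distribution, a rank-based weighting plays the role of the shape function, and the distribution's mean and log standard deviation are updated along a natural-gradient direction (a simple stand-in for the quasi-Newton-like step). All names, weights, and step sizes here are illustrative assumptions.

```python
import numpy as np

def stochastic_search(f, dim, iters=300, pop=100, lr=0.2, seed=0):
    """Minimize a possibly non-differentiable f by updating the parameters
    (mean, log-std) of a Gaussian sampling distribution.

    Illustrative sketch only -- rank weights and natural-gradient steps
    stand in for the paper's shape function and quasi-Newton-like update.
    """
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)          # mean of the sampling distribution
    log_sigma = np.zeros(dim)   # log of per-coordinate standard deviation
    for _ in range(iters):
        sigma = np.exp(log_sigma)
        z = rng.standard_normal((pop, dim))
        x = mu + sigma * z                         # population of candidates
        vals = np.array([f(xi) for xi in x])
        # Rank-based weights: best (lowest) value gets the largest weight.
        w = np.empty(pop)
        w[vals.argsort()] = np.linspace(1.0, 0.0, pop)
        w -= w.mean()                              # center to reduce variance
        # Natural-gradient directions for the Gaussian parameters.
        g_mu = (w[:, None] * (x - mu)).mean(axis=0)
        g_ls = (w[:, None] * (z**2 - 1.0)).mean(axis=0) / 2.0
        mu += lr * g_mu
        log_sigma += lr * g_ls
    return mu
```

Because the objective `f` is only ever evaluated at sampled points, it may be non-differentiable or even discontinuous; differentiability is needed only in the distribution parameters `mu` and `log_sigma`, which is exactly the reformulation the abstract describes.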