Accelerating the convergence of random search methods for discrete stochastic optimization

  • Authors:
  • Sigrún Andradóttir

  • Affiliations:
  • Georgia Institute of Technology, Atlanta

  • Venue:
  • ACM Transactions on Modeling and Computer Simulation (TOMACS)
  • Year:
  • 1999


Abstract

We discuss how to estimate the optimal solution when random search methods are applied to solve discrete stochastic optimization problems. At present, such optimization methods usually estimate the optimal solution using either the feasible solution the method is currently exploring or the feasible solution visited most often so far by the method. We propose instead using all the observed objective function values generated as the random search method moves around the feasible region seeking an optimal solution, thereby obtaining increasingly precise estimates of the objective function values at the different points in the feasible region. At any given time, the feasible solution with the best estimated objective function value (the largest for maximization problems; the smallest for minimization problems) is used as the estimate of the optimal solution. We discuss the advantages of this approach for estimating the optimal solution and present numerical results showing that modifying an existing random search method to use this approach appears to yield improved performance. We also present several rate-of-convergence results for random search methods using our approach for estimating the optimal solution. One of these random search methods is a new variant of the stochastic comparison method; in addition to specifying the rate of convergence of this method, we prove that it is guaranteed to converge almost surely to the set of global optimal solutions, and we present a result demonstrating that this method is likely to perform well in practice.
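The estimation scheme described in the abstract lends itself to a compact illustration. The Python sketch below is not the paper's exact algorithm: the neighborhood structure, the single-sample acceptance rule, and all names (`noisy_f`, `neighbors`, `x0`) are illustrative assumptions. It shows the core idea only, namely accumulating every noisy observation into per-solution running averages and reporting the visited solution with the best running average, rather than the current point or the most-visited point (a minimization problem is assumed here).

```python
import random
from collections import defaultdict

def random_search_with_averaging(neighbors, noisy_f, x0, num_iters, rng=None):
    """Random search over a discrete feasible set (minimization sketch).

    `neighbors(x)` returns a non-empty list of candidate moves from x, and
    `noisy_f(x)` returns one noisy observation of the objective at x; both
    are assumed, problem-specific callables.
    """
    rng = rng or random.Random()
    sums = defaultdict(float)   # sum of all observations at each solution
    counts = defaultdict(int)   # number of observations at each solution

    def observe(x):
        """Draw one noisy sample at x and fold it into x's running average."""
        y = noisy_f(x)
        sums[x] += y
        counts[x] += 1
        return y

    x = x0
    observe(x)
    for _ in range(num_iters):
        candidate = rng.choice(neighbors(x))
        # Simple acceptance rule standing in for the paper's search methods:
        # compare one fresh noisy sample at each point. Every sample drawn
        # here also refines the running averages used for estimation below.
        if observe(candidate) < observe(x):
            x = candidate

    # Estimate of the optimal solution: the visited solution with the best
    # (smallest) running average of all its observations so far.
    return min(counts, key=lambda s: sums[s] / counts[s])
```

The point of separating the acceptance rule from the estimator is that the search can keep moving, as it must to explore the feasible region, while the reported solution depends on all accumulated evidence, so its objective-value estimates become increasingly precise at frequently revisited points.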