Global random optimization by simultaneous perturbation stochastic approximation

  • Authors:
  • John L. Maryak; Daniel C. Chin

  • Affiliations:
  • The Johns Hopkins University, Laurel, MD; The Johns Hopkins University, Laurel, MD

  • Venue:
  • Proceedings of the 33rd conference on Winter simulation
  • Year:
  • 2001

Abstract

A common desire with iterative optimization techniques is that the algorithm reach the global optimum rather than become stranded at a local optimum. Here, we examine the global convergence properties of a "gradient-free" stochastic approximation algorithm called SPSA, which has performed well in complex optimization problems. We establish two theorems on the global convergence of SPSA. The first provides conditions under which SPSA will converge in probability to a global optimum using the well-known method of injected noise. In the second theorem, we show that, under different conditions, "basic" SPSA without injected noise can achieve convergence in probability to a global optimum. This latter result can have important benefits in the setup (tuning) and performance of the algorithm. The discussion is supported by numerical studies showing favorable comparisons of SPSA to simulated annealing and genetic algorithms.
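To make the two variants discussed in the abstract concrete, below is a minimal Python sketch of SPSA with an optional injected-noise term. The two-sided gradient estimate with Bernoulli ±1 perturbations and the gain sequences a_k, c_k follow standard SPSA practice; the function names, parameter defaults, and the 1/(k+1) noise-decay schedule are illustrative assumptions, not the paper's exact conditions for global convergence.

    import numpy as np

    def spsa_minimize(loss, theta0, n_iter=1000, a=0.1, c=0.1, A=100,
                      alpha=0.602, gamma=0.101, inject_noise=False, q=0.5,
                      rng=None):
        """Minimize `loss` with basic SPSA; optionally add injected noise.

        Illustrative sketch only: the injected-noise decay schedule is a
        generic choice, not the specific conditions of the paper's theorems.
        """
        rng = np.random.default_rng() if rng is None else rng
        theta = np.asarray(theta0, dtype=float).copy()
        for k in range(n_iter):
            a_k = a / (k + 1 + A) ** alpha   # step-size gain sequence
            c_k = c / (k + 1) ** gamma       # perturbation gain sequence
            # Simultaneous perturbation: one random +/-1 direction for all
            # coordinates, so only two loss evaluations per iteration
            delta = rng.choice([-1.0, 1.0], size=theta.shape)
            y_plus = loss(theta + c_k * delta)
            y_minus = loss(theta - c_k * delta)
            g_hat = (y_plus - y_minus) / (2.0 * c_k * delta)  # gradient estimate
            theta -= a_k * g_hat
            if inject_noise:
                # Slowly decaying Gaussian noise, the "injected noise" device
                # used in the first theorem to escape local minima
                theta += (q / (k + 1)) * rng.standard_normal(theta.shape)
        return theta

As a usage example on a multimodal test function (again only a sketch):

    rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
    theta_hat = spsa_minimize(rastrigin, theta0=[2.0, -2.0],
                              n_iter=5000, inject_noise=True)

Note the design point the abstract highlights: with inject_noise=False the loop above is "basic" SPSA, whose only randomness comes from the perturbation directions, so the second theorem's result removes the need to tune the extra noise schedule.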