Averaging and derivative estimation within stochastic approximation algorithms

  • Authors:
  • Fatemeh Sadat Hashemi; Raghu Pasupathy

  • Affiliations:
  • Virginia Tech, Blacksburg, VA (both authors)

  • Venue:
  • Proceedings of the Winter Simulation Conference
  • Year:
  • 2012


Abstract

Stochastic Approximation (SA) is arguably the most investigated among algorithms for solving local continuous simulation optimization problems. Despite its enduring popularity, the prevailing opinion is that the finite-time performance of SA-type algorithms remains sensitive to the choice of SA's algorithm parameters, particularly the step-size sequence. In the last two decades, two major advances have been proposed toward alleviating this issue: (i) Polyak-Ruppert averaging, where SA is executed on multiple time scales, allowing the iterates to use large (initial) step sizes for better finite-time performance without sacrificing the asymptotic convergence rate; and (ii) efficient derivative estimation, allowing better search within the solution space. Interestingly, however, the existing literature on SA seems to treat each of these advances separately. In this article, we present two results that characterize SA's convergence rates when both (i) and (ii) are applied simultaneously. Our results should be seen as simply providing a theoretical basis for applying ideas that seem reasonable in practice.
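To make the averaging idea in (i) concrete, the following is a minimal sketch (not the authors' algorithm) of Robbins-Monro stochastic approximation with Polyak-Ruppert iterate averaging on a one-dimensional toy problem. The quadratic objective, the noise model, and the step-size exponent 0.7 are illustrative assumptions; the point is only that a slowly decaying step size combined with averaging of the iterates recovers a good estimate of the optimum.

```python
import random

def sa_polyak_ruppert(grad, x0, n_iters, seed=0):
    """Robbins-Monro SA with Polyak-Ruppert averaging.

    grad(x, rng) returns a noisy gradient estimate at x.
    Step size a_k = (k+1)^{-0.7} decays slower than 1/k, which is what
    averaging exploits: the raw iterate stays noisy, but the running
    average of iterates attains the fast asymptotic rate.
    Returns (last_iterate, averaged_iterate).
    """
    rng = random.Random(seed)
    x = x0
    avg = 0.0
    for k in range(n_iters):
        x = x - (k + 1) ** -0.7 * grad(x, rng)
        avg += (x - avg) / (k + 1)  # running average of the iterates
    return x, avg

# Toy problem: minimize f(x) = (x - 2)^2 / 2, so grad f(x) = x - 2,
# observed with additive Gaussian noise (a stand-in for a simulation
# estimator of the gradient).
noisy_grad = lambda x, rng: (x - 2.0) + rng.gauss(0.0, 1.0)
x_last, x_avg = sa_polyak_ruppert(noisy_grad, x0=0.0, n_iters=20000)
```

In this sketch the averaged iterate `x_avg` is typically a markedly less noisy estimate of the optimum (here, 2.0) than the final raw iterate `x_last`, which is the practical payoff of Polyak-Ruppert averaging.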