Stochastic Approximation (SA) is arguably the most investigated among algorithms for solving local continuous simulation optimization problems. Despite its enduring popularity, the prevailing opinion is that the finite-time performance of SA-type algorithms remains sensitive to the choice of SA's algorithm-parameter sequences. In the last two decades, two major advances have been proposed toward alleviating this issue: (i) Polyak-Ruppert averaging, where SA is executed on multiple time scales so that the algorithm iterates can use large (initial) step sizes for better finite-time performance without sacrificing the asymptotic convergence rate; and (ii) efficient derivative estimation to allow for better searching within the solution space. Interestingly, however, all existing literature on SA seems to treat each of these advances separately. In this article, we present two results that characterize SA's convergence rates when both (i) and (ii) are applied simultaneously. Our results should be seen as simply providing a theoretical basis for applying ideas that seem reasonable in practice.
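As a concrete illustration of idea (i), the following is a minimal Python sketch of Polyak-Ruppert averaging, not drawn from the article itself: the quadratic objective, the step-size schedule `a / k**alpha`, and all function names are illustrative assumptions. The iterates use a "large" step size with exponent `alpha` in (1/2, 1), and the running average of the iterates is returned alongside the final iterate.

```python
import random

def polyak_ruppert_sa(grad_noisy, x0, n_iters=20000, a=1.0, alpha=0.7, seed=0):
    """Stochastic approximation with Polyak-Ruppert iterate averaging.

    Runs x_{k+1} = x_k - (a / k**alpha) * g_k, where g_k is a noisy
    gradient observation, and maintains the running mean of the iterates.
    The large step size (alpha < 1) gives fast initial progress, while
    averaging smooths out the resulting noise in the iterates.
    (Illustrative sketch; parameters are not tuned to any specific problem.)
    """
    rng = random.Random(seed)
    x = x0
    avg = 0.0
    for k in range(1, n_iters + 1):
        g = grad_noisy(x, rng)
        x = x - (a / k**alpha) * g
        avg += (x - avg) / k  # incremental running mean of the iterates
    return x, avg

# Hypothetical test problem: noisy gradient of f(x) = (x - 2)^2 / 2,
# so the true gradient is (x - 2), observed with additive Gaussian noise.
def grad_noisy(x, rng):
    return (x - 2.0) + rng.gauss(0.0, 1.0)

last_iterate, averaged_iterate = polyak_ruppert_sa(grad_noisy, x0=10.0)
```

Under these assumptions the averaged iterate is typically much closer to the root x* = 2 than the last iterate, reflecting the variance reduction that averaging provides on top of the aggressive step-size schedule.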