Convergence rate results are derived for a stochastic optimization problem in which a performance measure is minimized with respect to a vector parameter t. Assuming that a gradient estimator is available and that both its bias and its variance are known functions of the computational budget devoted to its computation, the estimator is used within a stochastic approximation (SA) algorithm. Our interest is in how to allocate the total available computational budget across the successive SA iterations. We solve the asymptotic version of this problem by finding the convergence rate of SA toward the optimizer, first as a function of the number of iterations and then as a function of the total computational effort; from this, the optimal rate of increase of the computational budget per iteration can be found. Explicit expressions are derived for the case where the budget devoted to an iteration is polynomial in the iteration number and where the bias and variance of the gradient estimator are polynomial in that budget. Applications include the optimization of steady-state simulation models with likelihood-ratio, perturbation-analysis, or finite-difference gradient estimators; the optimization of infinite-horizon models with discounting; the optimization of functions of several expectations; and so on. Several examples are discussed, and the results generalize readily to general root-finding problems.
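As a concrete illustration of the setting (not code from the paper), the sketch below runs SA on a hypothetical one-dimensional objective f(t) = (t - 2)^2 with a gradient estimator whose bias and standard deviation both shrink as the per-iteration budget c_k grows; the budget schedule is polynomial in the iteration number, c_k = k^p. The estimator, the gain sequence, and all constants are illustrative assumptions, not the paper's specification.

```python
import math
import random

def noisy_gradient(t, budget, rng):
    # Hypothetical gradient estimator for f(t) = (t - 2)^2.
    # Assumed bias ~ 1/budget (e.g., a discretization effect) and
    # variance ~ 1/budget (averaging `budget` noisy replications).
    true_grad = 2.0 * (t - 2.0)
    bias = 1.0 / budget
    noise = rng.gauss(0.0, 1.0) / math.sqrt(budget)
    return true_grad + bias + noise

def sa_with_budget_schedule(t0=0.0, iters=2000, p=1.0, a=0.5, seed=1):
    # SA recursion t_{k+1} = t_k - a_k * g_k, where the budget devoted
    # to the k-th gradient estimate grows polynomially: c_k = k**p.
    rng = random.Random(seed)
    t = t0
    for k in range(1, iters + 1):
        c_k = max(1, int(k ** p))   # polynomial budget per iteration
        a_k = a / k                 # standard SA gain sequence
        t -= a_k * noisy_gradient(t, c_k, rng)
    return t

print(sa_with_budget_schedule())  # approaches the minimizer t* = 2
```

Making p larger buys lower bias and variance per gradient estimate at the price of fewer SA iterations for a fixed total budget sum(c_k); the paper's contribution is characterizing the rate, and hence the best choice of such a schedule, asymptotically.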