Minimization of unconstrained objective functions given in the form of a mathematical expectation is considered. The Sample Average Approximation (SAA) method replaces the expectation with a sample average over a large sample, transforming the problem into the minimization of a real-valued deterministic function. The main drawback of this approach is its cost: a large sample of the random variable that defines the expectation must be taken to obtain a reasonably good approximation, so the SAA method requires a very large number of function evaluations. We present a line search strategy that uses a variable sample size and thus makes the process significantly cheaper. Two measures of progress, the lack of precision and the decrease of the function value, are computed at each iteration, and a new sample size is determined from them. The rule we present allows the sample size to increase or decrease at each iteration until some neighborhood of the solution is reached. An additional safeguard check is performed to avoid unproductive sample decreases. Eventually the maximal sample size is reached, so the variable sample size strategy generates a solution of the same quality as the SAA method but with a significantly smaller number of function evaluations. The algorithm is tested on a couple of examples, including the discrete choice problem.
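The general scheme described in the abstract can be sketched in code. The following is only an illustrative toy, not the authors' exact algorithm: the objective E[(x - xi)^2] with xi ~ N(1, 1), the doubling/halving sample-size rule, the Armijo backtracking parameters, and the use of the standard error of the sample average as the precision measure are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def saa_value_grad(x, sample):
    # Sample average approximation of E[(x - xi)^2] and its gradient:
    # f_N(x) = (1/N) sum_i (x - xi_i)^2,  f_N'(x) = (2/N) sum_i (x - xi_i)
    diffs = x - sample
    return np.mean(diffs**2), np.mean(2.0 * diffs)

def variable_sample_line_search(x0, rng, n0=10, n_max=2000, iters=50):
    # Fixed pool of n_max draws; the current iteration uses the first n of them.
    pool = rng.normal(loc=1.0, scale=1.0, size=n_max)
    n, x = n0, x0
    for _ in range(iters):
        sample = pool[:n]
        f, g = saa_value_grad(x, sample)
        # Backtracking Armijo line search on the current sample average.
        alpha, d = 1.0, -g
        while True:
            f_new, _ = saa_value_grad(x + alpha * d, sample)
            if f_new <= f + 1e-4 * alpha * g * d or alpha < 1e-8:
                break
            alpha *= 0.5
        decrease = f - f_new                      # progress measure 1: decrease
        new_vals = (x + alpha * d - sample)**2
        precision = np.std(new_vals) / np.sqrt(n)  # progress measure 2: precision
        x = x + alpha * d
        # Heuristic sample-size update (assumed thresholds): grow the sample
        # when the decrease is drowned out by sampling noise; shrink it only
        # when progress clearly dominates the noise (safeguard against
        # unproductive decreases), and never exceed n_max.
        if decrease < precision:
            n = min(2 * n, n_max)
        elif n > n0 and decrease > 10.0 * precision:
            n = max(n // 2, n0)
    return x, n

rng = np.random.default_rng(0)
x_star, n_final = variable_sample_line_search(x0=5.0, rng=rng)
```

Because the sample-size rule eventually drives n to n_max, the returned point coincides with a minimizer of the full-sample SAA problem (here, the mean of the pool, which is close to the true minimizer x* = 1), while early iterations run on much smaller samples.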