Optimization
Stochastic decomposition: an algorithm for two-stage linear programs with recourse
Mathematics of Operations Research
On the convergence of algorithms with implications for stochastic and nondifferentiable optimization
Mathematics of Operations Research
Annals of Operations Research - Special issue on sensitivity analysis and optimization of discrete event systems
Asymptotic theory for solutions in statistical estimation and stochastic programming
Mathematics of Operations Research
A simulation-based approach to two-stage stochastic programming with recourse
Mathematical Programming: Series A and B
On the Rate of Convergence of Optimal Solutions of Monte Carlo Approximations of Stochastic Programs
SIAM Journal on Optimization
Step decision rules for multistage stochastic programming: A heuristic approach
Automatica (Journal of IFAC)
On complexity of multistage stochastic programs
Operations Research Letters
Monte Carlo bounding techniques for determining solution quality in stochastic programs
Operations Research Letters
Recent Advances in Reinforcement Learning
Dual Averaging Methods for Regularized Stochastic Learning and Online Optimization
The Journal of Machine Learning Research
Approximation algorithms for 2-stage stochastic optimization problems
Proceedings of the 26th International Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS'06)
Optimal distributed online prediction using mini-batches
The Journal of Machine Learning Research
Scenario Trees and Policy Selection for Multistage Stochastic Programming Using Machine Learning
INFORMS Journal on Computing
We propose an alternative approach to stochastic programming based on Monte Carlo sampling and stochastic gradient optimization. The procedure is inherently probabilistic, and the computed solution is a random variable. We propose a solution concept in which the probability that the random algorithm produces a solution whose expected objective value departs from the optimal one by more than ε is sufficiently small. We derive complexity bounds on the number of iterations of this process, and we show that by repeating the basic process on independent samples, one can significantly reduce the number of iterations required.
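The repetition idea in the abstract can be sketched in code. The following is a minimal illustration, not the authors' method: it assumes a toy objective E[(x − ξ)²] with ξ ~ N(0, 1) (true minimizer x* = 0), runs a basic stochastic-gradient process several times on independent Monte Carlo samples, and keeps the candidate whose objective estimate on a fresh validation sample is best. All function names and parameters here are illustrative choices.

```python
import random

def sgd_run(n_iters, step0, seed):
    """One independent run of stochastic gradient descent on
    E[(x - xi)^2], xi ~ N(0, 1); the true minimizer is x* = 0."""
    rng = random.Random(seed)
    x = rng.uniform(-10.0, 10.0)      # random starting point
    for t in range(1, n_iters + 1):
        xi = rng.gauss(0.0, 1.0)      # fresh Monte Carlo sample
        grad = 2.0 * (x - xi)         # unbiased stochastic gradient
        x -= (step0 / t) * grad       # diminishing step size
    return x

def estimate_objective(x, n_samples, seed):
    """Monte Carlo estimate of the expected objective at x."""
    rng = random.Random(seed)
    return sum((x - rng.gauss(0.0, 1.0)) ** 2
               for _ in range(n_samples)) / n_samples

def repeated_sgd(n_runs, n_iters):
    """Amplification: repeat the basic process on independent samples,
    then keep the candidate with the best validated objective estimate.
    Each run fails to be epsilon-optimal with some probability p, so the
    chance that all n_runs fail decays like p ** n_runs."""
    candidates = [sgd_run(n_iters, step0=1.0, seed=k)
                  for k in range(n_runs)]
    return min(candidates,
               key=lambda x: estimate_objective(x, 10_000, seed=999))
```

Because each run is driven by an independent sample, the probability that every repetition misses an ε-optimal solution decays geometrically in the number of runs, which is the mechanism behind the reduced iteration counts claimed in the abstract.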