Information complexity of black-box convex optimization: a new look via feedback information theory
Allerton'09 Proceedings of the 47th annual Allerton conference on Communication, control, and computing
Smoothing Techniques for Computing Nash Equilibria of Sequential Games
Mathematics of Operations Research
Stochastic Root Finding and Efficient Estimation of Convex Risk Measures
Operations Research
The stochastic root-finding problem: Overview, solutions, and open questions
ACM Transactions on Modeling and Computer Simulation (TOMACS)
Ergodic stochastic optimization algorithms for wireless communication and networking
IEEE Transactions on Signal Processing
Dual Averaging Methods for Regularized Stochastic Learning and Online Optimization
The Journal of Machine Learning Research
Adaptive Subgradient Methods for Online Learning and Stochastic Optimization
The Journal of Machine Learning Research
Stochastic computing: embracing errors in architecture and design of processors and applications
CASES '11 Proceedings of the 14th international conference on Compilers, architectures and synthesis for embedded systems
On stochastic gradient and subgradient methods with adaptive steplength sequences
Automatica (Journal of IFAC)
Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms
Optimal distributed online prediction using mini-batches
The Journal of Machine Learning Research
Towards a unified architecture for in-RDBMS analytics
SIGMOD '12 Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data
On software design for stochastic processors
Proceedings of the 49th Annual Design Automation Conference
NASA: achieving lower regrets and faster rates via adaptive stepsizes
Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining
Designing Optimal Spectral Filters for Inverse Problems
SIAM Journal on Scientific Computing
Manifold identification in dual averaging for regularized stochastic online learning
The Journal of Machine Learning Research
Bayesian inference with optimal maps
Journal of Computational Physics
Separable approximate optimization of support vector machines for distributed sensing
ECML PKDD'12 Proceedings of the 2012 European conference on Machine Learning and Knowledge Discovery in Databases - Volume Part II
Averaging and derivative estimation within stochastic approximation algorithms
Proceedings of the Winter Simulation Conference
Structure and dynamics of information pathways in online media
Proceedings of the sixth ACM international conference on Web search and data mining
Online learning with multiple kernels: A review
Neural Computation
Constrained stochastic gradient descent for large-scale least squares problem
Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining
Instant foodie: predicting expert ratings from grassroots
Proceedings of the 22nd ACM international conference on Conference on information & knowledge management
On sample size control in sample average approximations for solving smooth stochastic programs
Computational Optimization and Applications
Multidimensional stochastic approximation: Adaptive algorithms and applications
ACM Transactions on Modeling and Computer Simulation (TOMACS) - Special issue on simulation in complex service systems
A nonmonotone approximate sequence algorithm for unconstrained nonlinear optimization
Computational Optimization and Applications
Communication-efficient algorithms for statistical optimization
The Journal of Machine Learning Research
In this paper we consider optimization problems in which the objective function is given in the form of an expectation. A basic difficulty in solving such stochastic optimization problems is that the multidimensional integrals (expectations) involved cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on Monte Carlo sampling techniques, namely the stochastic approximation (SA) and sample average approximation (SAA) methods. Both approaches have a long history. The current opinion is that the SAA method can efficiently exploit a specific (say, linear) structure of the problem at hand, while the SA approach is a crude subgradient method that often performs poorly in practice. We intend to demonstrate that a properly modified SA approach can be competitive with, and even significantly outperform, the SAA method for a certain class of convex stochastic problems. We extend the analysis to the case of convex-concave stochastic saddle point problems and present (in our opinion highly encouraging) results of numerical experiments.
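The contrast between the two approaches can be illustrated on a toy problem. The sketch below is not the authors' implementation: it solves min_x E[(x - xi)^2] with xi ~ N(0, 1), whose true minimizer is the mean 0. The SA routine takes stochastic gradient steps with iterate averaging (the "robust" modification the abstract alludes to), while the SAA routine fixes a batch of samples and minimizes the resulting sample-average problem, which here has the sample mean as its closed-form solution. All function names and parameter choices (step size, iteration counts) are illustrative assumptions.

```python
import random

def sa_solve(grad_sample, x0, steps, gamma):
    # Stochastic approximation with iterate averaging: take noisy
    # gradient steps, then return the running average of the iterates
    # rather than the last iterate (averaging damps the noise).
    x, avg = x0, 0.0
    for k in range(steps):
        x -= gamma * grad_sample(x)
        avg += (x - avg) / (k + 1)  # online running average
    return avg

def saa_solve(samples):
    # Sample average approximation: replace E[(x - xi)^2] by the
    # sample average (1/N) sum_i (x - xi_i)^2 and minimize that
    # deterministic problem; here the minimizer is the sample mean.
    return sum(samples) / len(samples)

if __name__ == "__main__":
    rng = random.Random(0)
    xi = lambda: rng.gauss(0.0, 1.0)      # scenario generator, xi ~ N(0, 1)
    g = lambda x: 2.0 * (x - xi())        # unbiased stochastic gradient of E[(x - xi)^2]
    x_sa = sa_solve(g, x0=5.0, steps=4000, gamma=0.05)
    x_saa = saa_solve([xi() for _ in range(4000)])
    print(x_sa, x_saa)  # both should land near the true minimizer 0
```

The design difference is visible even in this sketch: SA touches one sample per iteration and never stores the data, while SAA commits to a batch up front and hands the resulting deterministic problem to whatever solver fits its structure.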