Traditionally, stochastic approximation (SA) schemes have been popular choices for solving stochastic optimization problems. However, the performance of standard SA implementations can vary significantly with the choice of the steplength sequence, and in general little guidance is available on good choices. Motivated by this gap, we present two adaptive steplength schemes for strongly convex differentiable stochastic optimization problems, equipped with convergence theory, that aim to reduce the reliance on user-defined parameters. The first scheme, referred to as a recursive steplength stochastic approximation (RSA) scheme, optimizes the error bounds to derive a rule that expresses the steplength at a given iteration as a simple function of the steplength at the previous iteration and certain problem parameters. The second scheme, termed a cascading steplength stochastic approximation (CSA) scheme, maintains the steplength sequence as a piecewise-constant decreasing function, with the reduction in the steplength occurring when a suitable error threshold is met. We then allow for nondifferentiable objectives, provided they have bounded subgradients over a certain domain. In this regime, we propose a local smoothing technique, based on random local perturbations of the objective function, that leads to a differentiable approximation of the function. Assuming a uniform distribution on the local randomness, we establish a Lipschitzian property for the gradient of the approximation and prove that the resulting Lipschitz bound grows at a modest rate with problem size. This facilitates the development of an adaptive steplength stochastic approximation framework, which now requires sampling in the product space of the original measure and the artificially introduced distribution.
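To make the recursive steplength idea concrete, the following is a minimal sketch in Python. It assumes a recursion of the form gamma_{k+1} = gamma_k (1 - eta * gamma_k), with eta the strong convexity constant and gamma_0 < 1/eta; the exact rule and constants obtained from the error bounds in the paper may differ, and `grad_oracle` is a hypothetical stand-in for an unbiased stochastic gradient.

```python
import numpy as np

def rsa_sketch(grad_oracle, x0, eta, gamma0, num_iters, rng):
    """SA loop with a recursively updated steplength (illustrative sketch).

    grad_oracle(x, rng) is assumed to return an unbiased noisy gradient of a
    strongly convex objective with strong convexity constant eta.
    """
    x = np.asarray(x0, dtype=float)
    gamma = gamma0  # initial steplength, assumed to satisfy gamma0 < 1/eta
    for _ in range(num_iters):
        g = grad_oracle(x, rng)
        x = x - gamma * g
        # Recursive update: the next steplength is a simple function of the
        # current one and a problem parameter (hypothetical form).
        gamma = gamma * (1.0 - eta * gamma)
    return x

# Example: noisy quadratic f(x) = 0.5 * ||x||^2 with additive Gaussian noise.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
    x_final = rsa_sketch(noisy_grad, x0=np.ones(5), eta=1.0,
                         gamma0=0.5, num_iters=5000, rng=rng)
    print(x_final)  # should be close to the minimizer at the origin
```

For the nondifferentiable case, the local smoothing step can be sketched in the same spirit: a perturbation drawn uniformly from a ball of radius mu is added before the subgradient is evaluated, so each iteration samples from the product space of the original measure and the perturbation distribution. The estimator below is an illustrative construction under these assumptions, not necessarily the one analyzed in the paper.

```python
import numpy as np

def smoothed_step(x, subgrad_oracle, gamma, mu, rng):
    """One SA step on a locally smoothed objective (illustrative sketch).

    subgrad_oracle(y, rng) is assumed to return a stochastic subgradient of
    the original nondifferentiable objective at y.
    """
    # Draw a perturbation uniformly from the ball of radius mu:
    # uniform direction scaled by a radius distributed as mu * U^(1/n).
    d = rng.standard_normal(x.shape)
    d /= np.linalg.norm(d)
    z = mu * rng.uniform() ** (1.0 / x.size) * d
    # Evaluate the stochastic subgradient at the perturbed point; under
    # uniform smoothing this serves as an estimate of the gradient of the
    # differentiable approximation at x.
    g = subgrad_oracle(x + z, rng)
    return x - gamma * g
```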