How does a stochastic optimization/approximation algorithm adapt to a randomly evolving optimum/root with jump Markov sample paths

  • Authors:
  • G. Yin; C. Ion; V. Krishnamurthy

  • Affiliations:
  • Wayne State University, Department of Mathematics, Detroit, MI 48202, USA; Wayne State University, Department of Mathematics, Detroit, MI 48202, USA; University of British Columbia, Department of Electrical Engineering, Vancouver, Canada

  • Venue:
  • Mathematical Programming, Series B, Special Issue: Nonsmooth Optimization and Applications
  • Year:
  • 2009

Abstract

Stochastic optimization/approximation algorithms are widely used to recursively estimate the optimum or root of a suitable function from noisy observations when that optimum or root is constant or evolves randomly along slowly time-varying continuous sample paths. In contrast, this paper analyzes the asymptotic properties of stochastic optimization/approximation algorithms for recursively estimating the optimum or root when it evolves rapidly with nonsmooth (jump-changing) sample paths. The resulting problem falls into the category of regime-switching stochastic approximation algorithms with two time scales. Motivated by emerging applications in wireless communications and system identification, we analyze the asymptotic behavior of such algorithms. Our analysis assumes that the noisy observations contain a (nonsmooth) jump process modeled by a discrete-time Markov chain whose transition frequency varies much faster than the adaptation rate of the stochastic optimization algorithm. Using stochastic averaging, we prove convergence of the algorithm. The rate of convergence is obtained via bounds on the estimation errors and diffusion approximations. We also present remarks on improving the convergence rate through iterate averaging and on limit mean dynamics represented by differential inclusions.
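
To make the setting concrete, the following is a minimal simulation sketch; it is not the paper's algorithm or analysis, and the two-state chain, the jump targets, the step size mu, and the noise level are all invented for illustration. A constant-step stochastic approximation iterate tracks the root of f(theta, x_n) = x_n - theta while the regime x_n jumps according to a Markov chain that transitions much faster than the iterate adapts. By the stochastic-averaging argument the paper formalizes, the iterate settles near the root of the mean dynamics weighted by the chain's stationary distribution, and Polyak-Ruppert iterate averaging reduces the fluctuation of the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-regime jump target: the root of f(theta, x) = x - theta
# switches between these values as the Markov chain x_n jumps.
targets = np.array([-1.0, 2.0])

# Transition matrix of the modulating chain; it switches at every step,
# i.e. much faster than the O(mu) adaptation of the iterate.
P = np.array([[0.6, 0.4],
              [0.3, 0.7]])

mu = 1e-3        # small constant step size (slow time scale)
sigma = 0.5      # observation noise level
n_steps = 200_000

theta, theta_bar, state = 0.0, 0.0, 0
for n in range(n_steps):
    state = rng.choice(2, p=P[state])                      # fast Markov jump
    obs = targets[state] - theta + sigma * rng.standard_normal()
    theta += mu * obs                                      # SA recursion
    theta_bar += (theta - theta_bar) / (n + 1)             # iterate averaging

# Stationary distribution pi of P (solve pi P = pi, sum(pi) = 1); averaging
# theory predicts theta tracks the root of the pi-weighted mean dynamics.
pi = np.linalg.solve(np.vstack([(P.T - np.eye(2))[:1], np.ones(2)]),
                     np.array([0.0, 1.0]))
print("final iterate   :", theta)
print("averaged iterate:", theta_bar)
print("pi-averaged root:", pi @ targets)
```

With these hypothetical parameters the stationary distribution is (3/7, 4/7), so both estimates should land near 3/7 * (-1) + 4/7 * 2 = 5/7, roughly 0.714, with the averaged iterate fluctuating less than the raw one.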