Stability of Stochastic Approximation under Verifiable Conditions

  • Authors:
  • Christophe Andrieu; Éric Moulines; Pierre Priouret

  • Venue:
  • SIAM Journal on Control and Optimization
  • Year:
  • 2005

Abstract

In this paper we address the problem of the stability and convergence of the stochastic approximation procedure \[ \theta_{n+1} = \theta_n + \gamma_{n+1} [h(\theta_n)+\xi_{n+1}]. \] The stability of such sequences $\{\theta_n\}$ is known to rely heavily on the behavior of the mean field $h$ at the boundary of the parameter set and on the magnitude of the stepsizes used. The conditions typically required to ensure convergence, and in particular the boundedness or stability of $\{\theta_n\}$, are either too difficult to check in practice or not satisfied at all, even for very simple models. The most popular technique for circumventing the stability problem consists of constraining $\{\theta_n\}$ to a compact subset $\mathcal{K}$ of the parameter space. This is obviously not a satisfactory solution, as the choice of $\mathcal{K}$ is a delicate one. We first prove a ``deterministic'' stability result, which relies on simple conditions on the sequences $\{\xi_n\}$ and $\{\gamma_n\}$. We then propose and analyze an algorithm based on projections on adaptive truncation sets, which ensures that the aforementioned conditions required for stability are satisfied. We focus in particular on the case where $\{\xi_n\}$ is a so-called Markov state-dependent noise. We establish both the stability and the convergence with probability 1 (w.p. 1) of the algorithm under a set of simple and verifiable assumptions. We illustrate our results with an example related to adaptive Markov chain Monte Carlo algorithms.
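
The recursion and the adaptive-truncation device described in the abstract can be sketched numerically. The Python snippet below is only an illustration under simplifying assumptions: the truncation sets are taken to be nested balls of doubling radius, the re-initialization point is the starting value, and the stepsizes restart as $\gamma_n = \gamma_0/n$ after each truncation. None of these concrete choices is prescribed by the paper, whose results hold under more general, explicitly stated assumptions on the truncation sets, the stepsizes, and the noise.

```python
import numpy as np

def sa_with_adaptive_truncation(h, noise, theta0, n_iter=10_000,
                                gamma0=1.0, r0=10.0, seed=0):
    """Illustrative sketch of stochastic approximation
        theta_{n+1} = theta_n + gamma_{n+1} [h(theta_n) + xi_{n+1}]
    with projections on adaptive truncation sets (here: nested balls K_k of
    radius r0 * 2^k -- an assumed, simplified choice, not the paper's spec).

    Whenever the update leaves the current truncation set, the iterate is
    reset to the initial point, the stepsize sequence is restarted, and the
    truncation set is enlarged.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    k = 0        # index of the active truncation set K_k
    count = 0    # steps since the last re-initialization (drives the stepsize)
    for _ in range(n_iter):
        count += 1
        gamma = gamma0 / count                 # gamma_n = gamma0 / n after each restart
        xi = noise(theta, rng)                 # noise term xi_{n+1} (may depend on theta)
        proposal = theta + gamma * (h(theta) + xi)
        if np.linalg.norm(proposal) <= r0 * 2.0 ** k:
            theta = proposal                   # proposal stays inside K_k: accept it
        else:
            theta = np.asarray(theta0, dtype=float)  # reprojection: restart in K_0
            k += 1                                   # enlarge the truncation set
            count = 0                                # restart the stepsize sequence
    return theta, k

# Toy usage: mean field h(theta) = -theta (unique root at 0) with i.i.d. Gaussian noise.
if __name__ == "__main__":
    theta_hat, n_truncations = sa_with_adaptive_truncation(
        h=lambda t: -t,
        noise=lambda t, rng: rng.normal(size=t.shape),
        theta0=np.array([5.0]),
    )
    print(theta_hat, n_truncations)
```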