Strong points of weak convergence: a study using RPA gradient estimation for automatic learning

  • Authors: Felisa J. Vázquez-Abad
  • Affiliations: Department of Computer Science and Operations Research, University of Montreal, Canada; Department of Electrical and Electronic Engineering, The University of Melbourne, Australia
  • Venue: Automatica (Journal of IFAC)
  • Year: 1999

Abstract

This paper analyzes the behavior of adaptive control schemes for automatic learning. Estimates of the sensitivities are used in a gradient-based stochastic approximation procedure to drive the process along the steepest-descent trajectory in search of the optimum. The learning rates are kept constant for adaptability; for such procedures, convergence can be established only in a weak sense. A model problem of a flexible machine is presented, in which the control parameter is a probability vector. We propose a new sensitivity estimator that generalizes the phantom rare perturbation analysis (RPA) estimator to multi-valued decisions. From the basic properties of the estimators, we build several updating rules, based on weak convergence theory, that ensure asymptotic optimality. We illustrate the predicted theoretical behavior with computer simulations. Finally, we compare the behavior of our proposed scheme with that of a regenerative scheme for which strong convergence can be established. Our results show that weak convergence yields a dramatic improvement in the rate of convergence, in addition to the capability of adaptation, or tracking.
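To illustrate the kind of procedure the abstract describes, the following is a minimal sketch (not the paper's algorithm) of a constant-gain stochastic approximation over a probability-vector parameter: each iterate takes a noisy gradient step and is projected back onto the probability simplex, so that with a constant learning rate the iterates hover in a neighborhood of the optimum rather than converging almost surely. The gradient estimator here is a hypothetical noisy stand-in for a sensitivity estimate such as phantom RPA.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x >= 0, sum(x) = 1} (standard sort-based algorithm)."""
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def constant_step_sa(grad_est, theta0, eps=0.05, n_iter=2000, rng=None):
    """Projected stochastic approximation with constant gain eps.
    grad_est(theta, rng) returns a noisy gradient estimate."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        g = grad_est(theta, rng)
        theta = project_simplex(theta - eps * g)  # descent step + projection
    return theta

# Toy usage: noisy gradient of ||theta - target||^2; the constant gain
# keeps the iterates fluctuating near the target (weak-sense behavior).
rng = np.random.default_rng(0)
target = np.array([0.5, 0.3, 0.2])

def noisy_grad(theta, rng):
    return 2.0 * (theta - target) + 0.1 * rng.standard_normal(theta.shape)

theta_final = constant_step_sa(noisy_grad, np.ones(3) / 3, eps=0.05, rng=rng)
```

With a constant gain the iterates do not settle to a point; shrinking `eps` tightens the stationary fluctuation around the optimum, at the cost of slower adaptation to changes, which is the trade-off the paper's weak-convergence analysis addresses.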