Stochastic optimal control via Bellman's principle (Brief paper)

  • Authors:
  • Luis G. Crespo; Jian-Qiao Sun

  • Affiliations:
  • National Institute of Aerospace, 144 Research Drive, Hampton, VA 23666, USA; Department of Mechanical Engineering, University of Delaware, Newark, DE 19711, USA

  • Venue:
  • Automatica (Journal of IFAC)
  • Year:
  • 2003

Abstract

This paper presents a strategy for finding optimal controls of non-linear systems subject to random excitations. The method is capable of generating global control solutions when state and control constraints are present. The solution is global in the sense that controls are obtained for all initial conditions in a region of the state space. The approach is based on Bellman's principle of optimality, the cumulant neglect closure method and the short-time Gaussian approximation. The examples consider problems with state-dependent diffusion terms, non-closeable hierarchies of moment equations for the states, and singular state boundary conditions. The uncontrolled and controlled system responses are evaluated by creating a Markov chain with a control-dependent transition probability matrix via the generalized cell mapping method. In all numerical examples, excellent controlled performance was obtained.
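
To make the overall structure concrete, the following is a minimal sketch (not the authors' implementation) of a Bellman backward recursion over a discretized, cell-mapped state space with a control-dependent transition probability matrix. The arrays `P`, `L`, and `phi`, and the sizes `n_cells`, `n_controls`, and `horizon` are hypothetical placeholders; in the paper the transition probabilities would instead come from the short-time Gaussian approximation of the controlled moment equations.

```python
import numpy as np

# Illustrative sketch only, assuming a toy cost and randomly generated
# transition probabilities in place of the cell-mapping construction.
rng = np.random.default_rng(0)
n_cells, n_controls, horizon = 50, 5, 20

# Hypothetical control-dependent Markov chain:
# P[u, i, j] = Pr(next cell j | current cell i, control u)
P = rng.random((n_controls, n_cells, n_cells))
P /= P.sum(axis=2, keepdims=True)          # normalize rows to probabilities

# Hypothetical running cost L(i, u) and terminal cost phi(i)
L = rng.random((n_cells, n_controls))
phi = rng.random(n_cells)

V = phi.copy()                              # value function at the final time
policy = np.zeros((horizon, n_cells), dtype=int)

# Bellman's principle: step backward in time, minimizing expected cost-to-go.
for k in reversed(range(horizon)):
    Q = L + np.einsum('uij,j->iu', P, V)    # Q[i, u] = L(i, u) + E[V(next) | i, u]
    policy[k] = Q.argmin(axis=1)            # feedback law for every cell (global solution)
    V = Q.min(axis=1)
```

Because the minimization is carried out for every cell, the resulting `policy` is a feedback law over the whole discretized region, which is the sense in which the paper's solution is "global".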