Brief paper: Policy iteration based feedback control

  • Authors:
  • Kan-Jian Zhang; Yan-Kai Xu; Xi Chen; Xi-Ren Cao

  • Affiliations:
  • Research Institute of Automation, Southeast University, Nanjing 210096, China
  • CFINS, Department of Automation, Tsinghua University, Beijing 100084, China
  • CFINS, Department of Automation, Tsinghua University, Beijing 100084, China
  • Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong

  • Venue:
  • Automatica (Journal of IFAC)
  • Year:
  • 2008

Abstract

It is well known that stochastic control systems can be viewed as Markov decision processes (MDPs) with continuous state spaces. In this paper, we propose to apply the policy iteration approach of MDPs to the optimal control problem of stochastic systems. We first provide an optimality equation based on performance potentials and develop a policy iteration procedure. We then apply policy iteration to the jump linear quadratic problem and obtain the coupled Riccati equations that characterize its optimal solution. The approach is applicable to linear as well as nonlinear systems and can be implemented on-line on real-world systems without identifying all of the system's structure and parameters.
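For orientation, the sketch below shows classical policy iteration on a finite-state, finite-action MDP: evaluate the current policy, then improve it greedily, and repeat until the policy is stable. This is only a simplified finite-state illustration of the general idea; the paper's contribution is the extension to continuous-state stochastic control via performance potentials, which this toy example does not implement. The function name and cost/transition conventions here are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def policy_iteration(P, c, gamma=0.95):
    """Classical policy iteration for a finite MDP (illustrative sketch only).

    P : list of length n_actions; P[a] is an (n_states x n_states) transition matrix.
    c : (n_states x n_actions) array of stage costs (we minimize discounted cost).
    """
    n_states, n_actions = c.shape
    policy = np.zeros(n_states, dtype=int)  # start from an arbitrary policy

    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = c_pi for the current policy.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        c_pi = c[np.arange(n_states), policy]
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, c_pi)

        # Policy improvement: pick the cost-minimizing action against the evaluated V.
        Q = np.stack([c[:, a] + gamma * P[a] @ V for a in range(n_actions)], axis=1)
        new_policy = Q.argmin(axis=1)

        if np.array_equal(new_policy, policy):
            return policy, V  # policy is stable, hence optimal for this finite MDP
        policy = new_policy

# Example usage on a tiny 2-state, 2-action MDP with made-up numbers:
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),
     np.array([[0.5, 0.5], [0.6, 0.4]])]
c = np.array([[1.0, 2.0], [4.0, 0.5]])
policy, V = policy_iteration(P, c)
```

In the continuous-state setting treated in the paper, the policy-evaluation step cannot be a finite linear solve; it is replaced by an optimality equation built on performance potentials, and for the jump linear quadratic problem the improvement step reduces to solving coupled Riccati equations.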