Optimization over state feedback policies for robust control with constraints

  • Authors:
  • Paul J. Goulart
  • Eric C. Kerrigan
  • Jan M. Maciejowski

  • Affiliations:
  • Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK
  • Department of Aeronautics and Department of Electrical and Electronic Engineering, Imperial College London, Exhibition Road, London SW7 2AZ, UK
  • Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK

  • Venue:
  • Automatica (Journal of IFAC)
  • Year:
  • 2006

Abstract

This paper is concerned with the optimal control of linear discrete-time systems subject to unknown but bounded state disturbances and mixed polytopic constraints on the state and input. It is shown that the class of admissible affine state feedback control policies with knowledge of prior states is equivalent to the class of admissible feedback policies that are affine functions of the past disturbance sequence. This implies that a broad class of constrained finite horizon robust and optimal control problems, where the optimization is over affine state feedback policies, can be solved in a computationally efficient fashion using convex optimization methods. This equivalence result is used to design a robust receding horizon control (RHC) state feedback policy such that the closed-loop system is input-to-state stable (ISS) and the constraints are satisfied for all time and all allowable disturbance sequences. The cost to be minimized in the associated finite horizon optimal control problem is quadratic in the disturbance-free state and input sequences. The value of the receding horizon control law can be calculated at each sample instant using a single, tractable and convex quadratic program (QP) if the disturbance set is polytopic, or a tractable second-order cone program (SOCP) if the disturbance set is given by a 2-norm bound.
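The key equivalence above rests on the fact that, under causal affine disturbance feedback, the state trajectory remains an affine function of the disturbance sequence, so polytopic state and input constraints become convex in the policy parameters. The sketch below (not from the paper; the system matrices, horizon, and gains are illustrative assumptions) numerically checks this affinity for a small linear system with a strictly block-lower-triangular feedback gain:

```python
# Sketch: under u = v + M w with M strictly block lower triangular
# (causality: u_k depends only on w_0..w_{k-1}), the stacked state
# trajectory of x_{k+1} = A x_k + B u_k + w_k is affine in w.
# All matrices and dimensions here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 2, 1, 4                       # state dim, input dim, horizon

A = np.array([[1.0, 0.1], [0.0, 1.0]])  # example discrete-time system
B = np.array([[0.0], [0.1]])
x0 = np.array([1.0, 0.0])

# Strictly block-lower-triangular gain M and open-loop offset v
M = np.zeros((m * N, n * N))
for i in range(N):
    for j in range(i):                  # u_i may only use w_0 .. w_{i-1}
        M[m*i:m*(i+1), n*j:n*(j+1)] = rng.normal(size=(m, n))
v = rng.normal(size=m * N)

def simulate(w):
    """Roll out x_{k+1} = A x_k + B u_k + w_k with u = v + M @ w."""
    u = v + M @ w
    x, traj = x0.copy(), []
    for k in range(N):
        x = A @ x + B @ u[m*k:m*(k+1)] + w[n*k:n*(k+1)]
        traj.append(x)
    return np.concatenate(traj)

# Affinity check: g(w) = simulate(w) - simulate(0) must be linear in w.
w1, w2 = rng.normal(size=n * N), rng.normal(size=n * N)
g = lambda w: simulate(w) - simulate(np.zeros(n * N))
assert np.allclose(g(w1 + w2), g(w1) + g(w2))
assert np.allclose(g(2.5 * w1), 2.5 * g(w1))
print("state trajectory is affine in the disturbance sequence")
```

Because the trajectory is affine in w, enforcing polytopic constraints for all disturbances in a polytopic set reduces to finitely many linear constraints on (M, v), which is what makes the finite horizon problem a tractable QP in the quadratic-cost case.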