Singularly Perturbed Discounted Markov Control Processes in a General State Space

  • Authors:
  • O. L. V. Costa; F. Dufour

  • Affiliations:
  • oswaldo@lac.usp.br; dufour@math.u-bordeaux1.fr

  • Venue:
  • SIAM Journal on Control and Optimization
  • Year:
  • 2012

Abstract

This paper studies the asymptotic optimality of discrete-time Markov decision processes (MDPs) with general state and action spaces and having weak and strong interactions. Using an approach similar to that developed by Liu, Zhang, and Yin [Appl. Math. Optim., 44 (2001), pp. 105-129], the idea in this paper is to consider an MDP with general state and action spaces and to reduce the dimension of the state space by considering an averaged model. This formulation is often described by introducing a small parameter $\epsilon > 0$ in the definition of the transition kernel, leading to a singularly perturbed Markov model with two time scales. Our objective is twofold. First, it is shown that the value function of the control problem for the perturbed system converges to the value function of a limit averaged control problem as $\epsilon$ goes to zero. Second, it is proved that a feedback control policy for the original control problem, defined using an optimal feedback policy for the limit problem, is asymptotically optimal. Our work extends existing results in the literature in two directions: the underlying MDP is defined on general state and action spaces, and we do not impose strong conditions on the recurrence structure of the MDP, such as Doeblin's condition.
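
As a minimal sketch of the two-time-scale structure described above (the kernels $P$, $Q$, the policy $\pi^\epsilon$, and the exact form of the perturbation below are generic illustrative notation, not necessarily the paper's definitions):

```latex
% Illustrative sketch only: a standard way to write a singularly
% perturbed transition kernel, following the Liu-Zhang-Yin style setup;
% the paper's precise construction may differ.
\[
  P_\epsilon(B \mid x, a) \;=\; P(B \mid x, a) \;+\; \epsilon\, Q(B \mid x, a),
  \qquad \epsilon > 0,
\]
% where P governs the fast (strong-interaction) dynamics and the
% epsilon-scaled term Q the slow (weak-interaction) dynamics.
% The first result asserts convergence of the optimal discounted values:
\[
  \lim_{\epsilon \to 0} V_\epsilon(x) \;=\; \bar{V}(x),
\]
% with V_epsilon the value function of the perturbed MDP and \bar{V}
% that of the limit averaged control problem. The second result says a
% policy pi^epsilon built from an optimal feedback policy of the limit
% problem is asymptotically optimal:
\[
  \lim_{\epsilon \to 0}
  \bigl|\, J_\epsilon(\pi^\epsilon, x) - V_\epsilon(x) \,\bigr| \;=\; 0,
\]
% where J_epsilon(pi, x) denotes the discounted cost of policy pi in the
% perturbed model starting from state x.
```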