Continuous-Time Markov Decision Processes with State-Dependent Discount Factors
Acta Applicandae Mathematicae: an international survey journal on applying mathematics and mathematical applications
This paper deals with continuous-time Markov decision processes (CTMDPs) in Polish spaces under an expected discounted reward criterion with state-dependent discount factors. The transition rates of the underlying continuous-time jump Markov processes are allowed to be unbounded, and the reward rates may be unbounded from above and from below. We first give conditions on the controlled system's primitive data, under which we prove, via Feller's construction of transition functions, that the transition functions of the possibly nonhomogeneous continuous-time Markov processes are regular. Then, under additional continuity and compactness conditions, we establish the existence of optimal stationary policies using the extended infinitesimal operators associated with the transition functions, and we provide a recursive scheme to compute (or at least approximate) the optimal reward values. Finally, we use examples to illustrate our results and the gap between our conditions and those in the previous literature.
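The recursive approximation of the optimal discounted reward can be illustrated in the simplest setting the paper generalizes: a finite-state, finite-action CTMDP with bounded transition rates, where uniformization reduces the discounted optimality equation to a contraction fixed-point iteration. The sketch below is only illustrative and does not cover the paper's actual setting (Polish spaces, unbounded rates and rewards, state-dependent discount factors); all names (`ctmdp_value_iteration`, the toy data) are my own, and a constant discount rate `alpha` is assumed.

```python
import numpy as np

def ctmdp_value_iteration(q, r, alpha, tol=1e-10, max_iter=100000):
    """Value iteration for a finite discounted CTMDP via uniformization.

    q[a]  : (S, S) generator matrix under action a (off-diagonal >= 0, rows sum to 0)
    r[a]  : (S,) reward-rate vector under action a
    alpha : constant discount rate > 0 (the paper allows state-dependent factors)
    """
    A, S, _ = q.shape
    # Uniformization constant: strictly dominates every total exit rate.
    lam = max(np.max(-np.diagonal(q[a])) for a in range(A)) + 1.0
    # Transition kernels of the uniformized discrete-time chain.
    P = np.stack([np.eye(S) + q[a] / lam for a in range(A)])
    gamma = lam / (alpha + lam)  # contraction modulus, < 1
    V = np.zeros(S)
    for _ in range(max_iter):
        # Bellman operator: (T V)(x) = max_a [r(x,a) + lam * (P_a V)(x)] / (alpha + lam)
        V_new = ((r + lam * P @ V) / (alpha + lam)).max(axis=0)
        if np.max(np.abs(V_new - V)) < tol * (1 - gamma):
            V = V_new
            break
        V = V_new
    # Recover a stationary policy attaining the maximum.
    policy = ((r + lam * P @ V) / (alpha + lam)).argmax(axis=0)
    return V, policy
```

With a single absorbing state and reward rate r under discount rate alpha, the iteration converges to the closed-form value r/alpha, which is a quick sanity check; the geometric convergence rate lam/(alpha + lam) mirrors the recursive approximation property stated in the abstract.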