Markov Decision Processes: Discrete Stochastic Dynamic Programming
From Perturbation Analysis to Markov Decision Processes and Reinforcement Learning
Discrete Event Dynamic Systems
Introduction to Discrete Event Systems
Policy iteration for customer-average performance optimization of closed queueing systems
Automatica (Journal of IFAC)
In this paper, we consider the optimization of service rates in queueing systems, in particular closed Jackson networks. The optimization criterion is the customer-average performance, an important metric that complements the traditional time-average performance. Based on the methodology of perturbation analysis, we derive a performance difference equation that quantifies the effect of changing the service rates. From this difference equation, we find that the optimal service rates have a Max-Min property: each optimal service rate can be chosen at either its maximal or its minimal value. This property reduces the complexity of this class of optimization problems, since the search shrinks from a continuous box of rates to its finitely many corner points. Moreover, we prove that the Max-Min optimality holds for both state-dependent and load-dependent service rates.
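A toy numerical check of the flavor of this result (not taken from the paper): for a hypothetical two-station cyclic closed network, whose product-form steady state is elementary, the customer-average cycle time obtained via Little's law is optimized at a corner of the admissible rate box, so searching only the corner policies matches an exhaustive grid search. The network, the rate range `(lo, hi)`, and the choice of cycle time as the performance metric are all illustrative assumptions.

```python
import itertools

def customer_avg_time(mu1, mu2, N=5):
    """Customer-average cycle time in a 2-station cyclic closed network.

    The state n (0..N) is the queue length at station 1; the chain is
    birth-death with rate mu2 for n -> n+1 (a station-2 completion) and
    rate mu1 for n -> n-1, so pi(n) is proportional to (mu2/mu1)**n.
    """
    r = mu2 / mu1
    weights = [r**n for n in range(N + 1)]      # unnormalized pi(n)
    Z = sum(weights)                            # normalizing constant
    throughput = mu1 * (1 - weights[0] / Z)     # station-1 completion rate
    return N / throughput                       # Little's law: mean cycle time

lo, hi = 0.5, 2.0                               # hypothetical admissible rates
grid = [lo + k * (hi - lo) / 20 for k in range(21)]

# Exhaustive search over a grid covering the whole rate box ...
best_grid = min(customer_avg_time(a, b)
                for a, b in itertools.product(grid, grid))
# ... versus searching only the 2**2 corner (max/min) rate choices,
# as a Max-Min property would license.
best_corner = min(customer_avg_time(a, b)
                  for a, b in itertools.product((lo, hi), repeat=2))

print(best_grid, best_corner)  # the two minima coincide at the corner (hi, hi)
```

In this toy case the throughput is increasing in both rates, so minimizing the cycle time pushes both rates to their maxima; the Max-Min result in the paper is stronger, since with general state- or load-dependent rates different stations may sit at different extremes.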