We consider the optimization of queueing systems whose service rates depend on the system state. The optimization criterion is the long-run customer-average performance, an important metric that differs from the traditional time-average performance. Using perturbation analysis, we first establish a difference equation for the customer-average performance of closed networks with exponentially distributed service times and state-dependent service rates. Based on this difference equation, we then propose a policy iteration algorithm that can be implemented online from a single sample path and does not require knowledge of the network's routing probabilities. Finally, numerical experiments demonstrate the efficiency of the algorithm. This paper opens a new direction for efficiently optimizing "customer-centric" performance in queueing systems.
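To make the setting concrete, the following is a minimal sketch of customer-average performance optimization on a toy closed network: two exponential stations in a cycle, where station 1's rate is state-dependent and chosen from a small candidate set. All network parameters, cost coefficients, and rate candidates below are hypothetical, and the improvement step is a simple greedy re-evaluation loop over exactly computed stationary distributions, not the paper's sample-path, difference-equation-based policy iteration; it only illustrates the customer-average criterion (time-average cost rate divided by customer completion rate).

```python
import numpy as np

# Toy closed cyclic network: 2 exponential stations, N customers.
# State n = number of customers at station 1 (station 2 holds N - n),
# so the state process is a birth-death CTMC on {0, ..., N}.
N = 5
MU2 = 1.0                # fixed service rate at station 2 (assumed)
RATES = [0.5, 1.0, 2.0]  # candidate state-dependent rates for station 1 (assumed)
HOLD = 1.0               # holding cost per customer per unit time at station 1
SPEED = 0.4              # operating cost per unit of service rate at station 1

def stationary(policy):
    """Stationary distribution under `policy`, where policy[n] is the
    station-1 service rate used in state n (for n >= 1)."""
    Q = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        if n < N:
            Q[n, n + 1] = MU2          # station 2 completion: customer moves to 1
        if n > 0:
            Q[n, n - 1] = policy[n]    # station 1 completion: customer moves to 2
        Q[n, n] = -Q[n].sum()
    # Solve pi Q = 0 with pi summing to 1.
    A = np.vstack([Q.T, np.ones(N + 1)])
    b = np.zeros(N + 2)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def customer_average(policy):
    """Customer-average cost: time-average cost rate divided by the
    long-run rate of station-1 service completions."""
    pi = stationary(policy)
    cost_rate = sum(pi[n] * (HOLD * n + (SPEED * policy[n] if n > 0 else 0.0))
                    for n in range(N + 1))
    completion_rate = sum(pi[n] * policy[n] for n in range(1, N + 1))
    return cost_rate / completion_rate

def greedy_improvement():
    """Coordinate-wise greedy improvement of the state-dependent rates;
    terminates because each accepted change strictly lowers the objective."""
    policy = {n: RATES[0] for n in range(1, N + 1)}
    improved = True
    while improved:
        improved = False
        for n in range(1, N + 1):
            best = min(RATES, key=lambda r: customer_average({**policy, n: r}))
            if best != policy[n]:
                policy[n] = best
                improved = True
    return policy, customer_average(policy)
```

The `customer_average` ratio is what distinguishes the customer-centric criterion from the usual time-average cost: the same stationary cost rate is normalized by throughput, so a policy can trade slower, cheaper service against serving fewer customers per unit time.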