Abandonment versus blocking in many-server queues: asymptotic optimality in the QED regime
Queueing Systems: Theory and Applications
In an M/M/N+M queue, when many customers are waiting, it may be preferable to reject a new arrival rather than risk that arrival later abandoning without receiving service. On the other hand, rejecting new arrivals increases the percentage of time servers are idle, which also may not be desirable. We address these trade-offs by considering an admission control problem for an M/M/N+M queue when there are costs associated with customer abandonment, server idleness, and turning away customers. First, we formulate the relevant Markov decision process (MDP), show that the optimal policy is of threshold form, and provide a simple and efficient iterative algorithm that does not presuppose a bounded state space to compute the minimum infinite-horizon expected average cost and the associated threshold level. Under certain conditions we can guarantee that the algorithm provides an exact optimal solution when it stops; otherwise, the algorithm stops when a provided bound on the optimality gap is reached. Next, we solve the approximating diffusion control problem (DCP) that arises in the Halfin-Whitt many-server limit regime. This allows us to establish that the parameter space has a sharp division. Specifically, there is an optimal solution with a finite threshold level when the cost of an abandonment exceeds the cost of rejecting a customer; otherwise, there is an optimal solution that exercises no control. This analysis also yields a convenient analytic expression for the infinite-horizon expected average cost as a function of the threshold level. Finally, we propose a policy for the original system that is based on the DCP solution, and show that this policy is asymptotically optimal. Our extensive numerical study shows that the control that arises from solving the DCP achieves a cost very close to that of the control that arises from solving the MDP, even when the number of servers is small.
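As a rough illustration of the setup (not the paper's algorithm), the average cost of any fixed threshold-K admission policy in an M/M/N+M queue can be evaluated directly from the stationary distribution of the resulting birth-death chain: arrivals are admitted only in states n < K, and the departure rate in state n is min(n, N)·mu from service plus max(n − N, 0)·gamma from abandonment. The cost parameter names below (c_aband, c_idle, c_reject) are illustrative stand-ins for the paper's abandonment, idleness, and rejection costs.

```python
def average_cost(lam, mu, gamma, N, K, c_aband, c_idle, c_reject):
    """Long-run average cost of a threshold-K admission policy
    in an M/M/N+M queue (illustrative sketch, not the paper's method).

    lam   : arrival rate
    mu    : per-server service rate
    gamma : per-customer abandonment rate while waiting
    N     : number of servers; K >= N is the admission threshold
    """
    # Unnormalized stationary probabilities from the birth-death
    # balance equations: p[n] * lam = p[n+1] * death(n+1) for n < K.
    p = [1.0]
    for n in range(1, K + 1):
        death = min(n, N) * mu + max(n - N, 0) * gamma
        p.append(p[-1] * lam / death)
    Z = sum(p)
    p = [x / Z for x in p]

    # Long-run rates/averages under the stationary distribution.
    aband_rate = sum(p[n] * max(n - N, 0) * gamma for n in range(K + 1))
    idle_servers = sum(p[n] * max(N - n, 0) for n in range(K + 1))
    reject_rate = lam * p[K]  # arrivals are blocked only in state K

    return c_aband * aband_rate + c_idle * idle_servers + c_reject * reject_rate
```

Scanning this function over K recovers the best finite threshold by brute force; the paper's contribution is characterizing that threshold structurally (via the MDP) and asymptotically (via the DCP) without such enumeration.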