Design of centralized ambulance diversion policies using simulation-optimization
Proceedings of the Winter Simulation Conference
Ambulance diversion (AD) is often used by emergency departments (EDs) to relieve congestion. When an ED is on diversion status, it requests that ambulances bypass the facility, so ambulance patients are transported to another ED. This paper studies the effect of AD policies on the average waiting time of patients. The AD policies analyzed include (i) a policy that initiates diversion when all beds are occupied; (ii) a policy obtained from a Markov Decision Process (MDP) formulation; and (iii) a policy that does not divert at all. The analysis is based on an ED that comprises two treatment areas. Diverted patients are assumed to be transported to a neighboring ED whose average waiting time is known. The results show that the policy obtained from the MDP formulation significantly reduces the average waiting time of patients in the ED. In addition, other heuristics are identified that perform well compared with not diverting at all.
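The abstract does not spell out the MDP formulation; the following is a minimal sketch of how a diversion policy of this kind can be computed, not the paper's actual model. It reduces the problem to a single treatment area modeled as an M/M/c/C queue in which the controller chooses, state by state, whether to accept or divert arriving patients. All rates, capacities, and costs below are illustrative assumptions; the cost per diverted patient stands in for the neighboring ED's known average waiting time.

```python
LAM = 4.0     # total arrival rate (assumed)
MU = 1.0      # per-bed service rate (assumed)
BEDS = 5      # number of treatment beds (assumed)
CAP = 12      # max patients in the ED, including those waiting (assumed)
HOLD = 1.0    # holding cost per waiting patient per unit time (assumed)
DIVERT = 3.0  # one-time cost per diverted patient, a proxy for the
              # neighboring ED's average waiting time (assumed)
GAMMA = 0.99  # discount factor

def _q_values(V, s):
    """Q-values of 'accept' (index 0) and 'divert' (index 1) in state s,
    on the uniformized discrete-time chain of the M/M/BEDS/CAP queue."""
    rate = LAM + BEDS * MU                  # uniformization constant
    p_arr = LAM / rate                      # prob. next event is an arrival
    p_dep = min(s, BEDS) * MU / rate        # prob. next event is a departure
    p_stay = 1.0 - p_arr - p_dep            # dummy self-loop transition
    hold = HOLD * max(s - BEDS, 0)          # cost of patients left waiting
    down = V[s - 1] if s > 0 else V[0]
    up = V[s + 1] if s < CAP else V[s]
    # When the ED is full, an "accepted" arrival must be diverted anyway.
    forced = 0.0 if s < CAP else p_arr * DIVERT
    q_accept = hold + forced + GAMMA * (p_arr * up + p_dep * down + p_stay * V[s])
    q_divert = hold + p_arr * DIVERT + GAMMA * (p_arr * V[s] + p_dep * down + p_stay * V[s])
    return q_accept, q_divert

def value_iteration(n_iter=5000):
    """Returns the value function and the greedy (cost-minimizing) policy."""
    V = [0.0] * (CAP + 1)
    for _ in range(n_iter):
        V = [min(_q_values(V, s)) for s in range(CAP + 1)]
    policy = [0 if qa <= qd else 1
              for qa, qd in (_q_values(V, s) for s in range(CAP + 1))]
    return V, policy

if __name__ == "__main__":
    V, policy = value_iteration()
    print("greedy policy by occupancy (0=accept, 1=divert):", policy)
```

In this toy setting the optimal policy is a threshold on occupancy, which makes the comparison in the abstract concrete: the divert-when-full heuristic is the extreme threshold at capacity, never diverting removes the threshold entirely, and the MDP picks the threshold that balances in-house waiting cost against the diversion cost.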