Objective: To present a decision model for controlling elective (non-emergency) patient admissions across distinct specialties on a periodic basis. The purpose of controlling admissions is to promote more efficient utilization of hospital resources, preventing both idleness and overuse of these resources while accounting for their relative importance.
Methods: The patient admission control problem is modeled as a Markov decision process. A hypothetical prototype is implemented, applying the value iteration algorithm.
Results: The model generates an optimal admission control policy that keeps resource consumption close to the desired utilization levels while minimizing the established deviation costs.
Conclusion: The model is complex due to its stochastic dynamics and high dimensionality. It has great potential for application, but requires the development of customized solution methods.
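The value iteration algorithm mentioned in the Methods section can be sketched as follows. This is a minimal, generic implementation for a finite MDP with cost minimization; the state space, action space, transition probabilities, and deviation costs below are hypothetical toy numbers, not the ones used in the paper's prototype.

```python
import numpy as np

def value_iteration(P, C, gamma=0.95, tol=1e-8, max_iter=10_000):
    """Solve a finite cost-minimizing MDP by value iteration.

    P[a, s, s'] -- probability of moving from state s to s' under action a
    C[a, s]     -- immediate (deviation) cost of taking action a in state s
    Returns the optimal value function V and a greedy policy.
    """
    n_actions, n_states = C.shape
    V = np.zeros(n_states)
    for _ in range(max_iter):
        # Bellman backup: Q[a, s] = C[a, s] + gamma * sum_s' P[a, s, s'] * V[s']
        Q = C + gamma * np.einsum("asx,x->as", P, V)
        V_new = Q.min(axis=0)  # minimize cost over actions
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = Q.argmin(axis=0)  # cost-minimizing action in each state
    return V, policy

# Toy 2-state, 2-action example (hypothetical: e.g. states = utilization
# levels "under target" / "over target", actions = admit few / admit many)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
C = np.array([[1.0, 2.0],
              [1.5, 0.5]])
V, policy = value_iteration(P, C)
```

Because the Bellman operator is a gamma-contraction, the iteration converges to the unique fixed point; for problems of realistic size, however, the abstract's conclusion applies: the state space grows combinatorially with the number of specialties and resources, which is why customized solution methods are needed.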