Markov Decision Processes: Discrete Stochastic Dynamic Programming
Estimating the Patient's Price of Privacy in Liver Transplantation
Operations Research
Assessing Dynamic Breast Cancer Screening Policies
Operations Research
Optimal Initiation and Management of Dialysis Therapy
Operations Research
OR Forum---A POMDP Approach to Personalize Mammography Screening Decisions
Operations Research
We develop a finite-horizon, discrete-time, constrained Markov decision process (MDP) to model diagnostic decisions after mammography, maximizing a patient's total expected quality-adjusted life years (QALYs) under resource constraints. We estimate the model's parameters from clinical data and solve the model as a mixed-integer program. By repeating the optimization over a sequence of budget levels, we calculate the incremental cost-effectiveness ratios attributable to consecutive levels of funding and compare actual clinical practice with the optimal decisions. We prove that the optimal value function is concave in the allocated budget. Compared with actual clinical practice, using the optimal thresholds for decision making may yield approximately 22% cost savings without sacrificing QALYs. Our analysis indicates that short-term follow-ups are the immediate target for elimination when the budget becomes a concern. As the budget increases, the policy change is more drastic in the older age group, yet the gains in total expected QALYs from larger budgets accrue predominantly to younger women, with modest gains for older women.
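The budget-sweep step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the incremental cost-effectiveness ratio (ICER) between consecutive budget levels is the extra cost divided by the extra QALYs gained, and the budget/QALY values below are hypothetical placeholders, not the paper's data.

```python
# Hedged sketch: ICERs across consecutive budget levels. In the paper, each
# QALY value would come from solving the constrained MDP (as a mixed-integer
# program) at that budget; here the optimal values are hypothetical numbers.

def icers(budgets, qalys):
    """Return the extra cost per extra QALY between consecutive budget levels."""
    ratios = []
    for (b0, q0), (b1, q1) in zip(zip(budgets, qalys),
                                  zip(budgets[1:], qalys[1:])):
        dq = q1 - q0
        # If a larger budget buys no additional QALYs, the ratio is unbounded.
        ratios.append(float("inf") if dq == 0 else (b1 - b0) / dq)
    return ratios

# Hypothetical sweep: optimal total expected QALYs at increasing budgets.
budgets = [100, 200, 300, 400]        # funding levels (arbitrary units)
qalys = [10.0, 10.8, 11.2, 11.3]      # optimal value at each budget

print(icers(budgets, qalys))
```

Because the optimal value function is concave in the budget, QALY gains diminish as funding grows, so the ICERs in such a sweep are nondecreasing: each additional QALY costs at least as much as the previous one.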