A Lagrangian approach to dynamic resource allocation

  • Authors:
  • Yasin Gocgun; Archis Ghate

  • Affiliations:
  • University of Washington, Seattle, WA; University of Washington, Seattle, WA

  • Venue:
  • Proceedings of the Winter Simulation Conference
  • Year:
  • 2010

Abstract

We define a class of discrete-time resource allocation problems in which multiple renewable resources must be dynamically allocated to different types of jobs arriving randomly. Jobs have geometric service durations, demand resources, incur a holding cost while waiting in queue and a penalty cost of rejection when the queue is filled to capacity, and generate a reward on completion. The goal is to select which jobs to service in each time period so as to maximize total infinite-horizon discounted expected profit. We present Markov Decision Process (MDP) models of these problems and apply a Lagrangian relaxation-based method that exploits the structure of the MDP models to approximate their optimal value functions. We then develop a dynamic programming technique to efficiently recover resource allocation decisions from this approximate value function on the fly. Numerical experiments demonstrate that these decisions outperform well-known heuristics by at least 35% and by as much as 220% on average.
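
As context for readers, the following is a minimal sketch of the standard Lagrangian relaxation bound for weakly coupled MDPs of this kind; the notation (multipliers λ, resource capacities b, discount factor α, per-job-type resource usage a_i and reward r_i) is assumed here for illustration and is not taken verbatim from the paper. Dualizing the linking resource constraints with λ ≥ 0 decomposes the relaxed problem by job type:

    V(x) \le V^{\lambda}(x) = \frac{\lambda^{\top} b}{1-\alpha} + \sum_{i} V_i^{\lambda}(x_i), \qquad \lambda \ge 0,

    V_i^{\lambda}(x_i) = \max_{u_i} \Big\{ r_i(x_i, u_i) - \lambda^{\top} a_i(x_i, u_i) + \alpha\, \mathbb{E}\big[ V_i^{\lambda}(x_i') \,\big|\, x_i, u_i \big] \Big\}.

Under such an approximation, decisions can then be recovered in each period by maximizing the one-step reward plus the approximate continuation value over actions that respect the resource capacities, a knapsack-like subproblem that the paper addresses with a dynamic programming technique.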