A reinforcement learning framework for utility-based scheduling in resource-constrained systems

  • Authors: David Vengerov
  • Affiliation: Sun Microsystems Laboratories, Menlo Park, CA

  • Year: 2005

Abstract

This paper presents a general methodology for scheduling jobs in soft real-time systems, where the utility of completing each job decreases over time. This scheduling problem is known to be NP-hard, so a heuristic solution is required for real-time operation. We present a utility-based framework for making repeated scheduling decisions based on dynamically observed information about unscheduled jobs and the system's resources. This framework generalizes the standard scheduling problem to a resource-constrained environment, where resource allocation (RA) decisions (how many CPUs to allocate to each job) have to be made concurrently with the scheduling decisions (when to execute each job). We then use discrete-time Optimal Control theory to formulate the optimization problem of finding the scheduling/RA policy that maximizes the average utility per time step obtained from completed jobs. We propose a Reinforcement Learning (RL) architecture for solving this NP-hard Optimal Control problem in real time, and our experimental results demonstrate the feasibility and benefits of the proposed approach.
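
As a concrete reading of the stated objective, one seeks a policy $\pi$ maximizing the long-run average utility collected per time step,

$$\rho^{\pi} = \lim_{T \to \infty} \frac{1}{T}\, \mathbb{E}\Big[ \sum_{t=1}^{T} U_t \,\Big|\, \pi \Big],$$

where $U_t$ denotes the utility obtained from jobs completed at step $t$ (this notation is assumed here for illustration, not quoted from the paper).

The sketch below illustrates, in hypothetical Python, the kind of architecture the abstract describes: jobs carry utilities that decay with waiting time, and a scheduler makes joint scheduling/CPU-allocation choices guided by a linear value estimate trained with TD(0). All names, state features, the linear decay shape, and the use of a discounted TD(0) update in place of the paper's average-reward formulation are simplifying assumptions for this example, not the paper's actual design.

```python
import random

# Hypothetical time-decaying utility: a job starts at max_utility and decays
# linearly to zero over its deadline window. The decay shape is an assumption.
def job_utility(max_utility, age, window):
    return max(0.0, max_utility * (1.0 - age / window))

class TDScheduler:
    """Greedy scheduler with a linear TD(0) value estimate (illustrative)."""

    def __init__(self, n_features=3, alpha=0.05, gamma=0.95, epsilon=0.1):
        self.w = [0.0] * n_features          # linear value-function weights
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def features(self, queue, free_cpus):
        # Assumed state features: queue length, mean job age, free CPUs.
        n = len(queue)
        mean_age = sum(j["age"] for j in queue) / n if n else 0.0
        return [float(n), mean_age, float(free_cpus)]

    def value(self, phi):
        return sum(wi * xi for wi, xi in zip(self.w, phi))

    def choose(self, queue, free_cpus):
        # Joint decision: which job to start and how many CPUs to give it.
        if not queue or free_cpus == 0:
            return None
        candidates = [(job, c) for job in queue
                      for c in range(1, free_cpus + 1)]
        if random.random() < self.epsilon:   # occasional exploration
            return random.choice(candidates)

        def score(pair):
            job, cpus = pair
            u = job_utility(job["max_u"], job["age"], job["window"])
            rest = [j for j in queue if j is not job]
            # Immediate utility plus estimated value of the post-decision state.
            return u + self.gamma * self.value(
                self.features(rest, free_cpus - cpus))

        return max(candidates, key=score)

    def update(self, phi, reward, phi_next):
        # Standard TD(0) update of the linear weights.
        delta = reward + self.gamma * self.value(phi_next) - self.value(phi)
        for i, x in enumerate(phi):
            self.w[i] += self.alpha * delta * x

if __name__ == "__main__":
    sched = TDScheduler()
    queue = [{"max_u": 10.0, "age": 2.0, "window": 8.0},
             {"max_u": 5.0, "age": 0.0, "window": 20.0}]
    print(sched.choose(queue, free_cpus=4))
```

Because the objective rewards utility per time step, the greedy score combines the immediate utility of starting a job with the estimated value of the resulting queue/CPU state; this is what allows a learned policy to trade off serving an urgent, fast-decaying job now against keeping CPUs free for higher-utility work.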