A Framework for Reinforcement-Based Scheduling in Parallel Processor Systems

  • Authors:
  • Albert Y. Zomaya; Matthew Clements; Stephan Olariu

  • Venue:
  • IEEE Transactions on Parallel and Distributed Systems
  • Year:
  • 1998

Abstract

Task scheduling is essential to the proper functioning of parallel processor systems. The static scheduling of tasks onto networks of parallel processors is well defined and well documented in the literature. In many practical situations, however, a priori information about the tasks to be scheduled is not available; tasks arrive dynamically, and scheduling must be performed on-line, or "on the fly." In this paper, we present a framework based on stochastic reinforcement learning, a simple and efficient technique commonly used to solve optimization problems. Reinforcement learning reduces the dynamic scheduling problem to that of learning a stochastic approximation of an unknown average error surface. The main advantage of the proposed approach is that no prior information about the parallel processor system under consideration is required. The learning system develops an association between the best action (schedule) and the current state of the environment (parallel system). The performance of reinforcement learning is demonstrated by solving several dynamic scheduling problems, and the conditions under which reinforcement learning can be used to efficiently solve the dynamic scheduling problem are highlighted.
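The abstract does not specify the update rule, so the following is only a minimal sketch of the general idea, assuming a linear reward-inaction (L_R-I) learning automaton, a common stochastic reinforcement scheme of that era. The class name, the hidden processor speeds, and the reward criterion (completion time beating a running average) are all illustrative assumptions, not the paper's actual algorithm.

```python
import random

class AutomatonScheduler:
    """Hypothetical learning automaton mapping arriving tasks to processors."""
    def __init__(self, num_processors, learning_rate=0.05):
        self.n = num_processors
        self.lr = learning_rate
        # Start with a uniform action-probability vector; action i means
        # "assign the arriving task to processor i".
        self.probs = [1.0 / num_processors] * num_processors

    def choose(self):
        # Sample a scheduling decision from the current distribution.
        return random.choices(range(self.n), weights=self.probs)[0]

    def update(self, action, rewarded):
        # Linear reward-inaction: reinforce the chosen action on reward,
        # leave the probabilities unchanged on penalty. The update keeps
        # the vector normalized (the terms sum to 1).
        if rewarded:
            for i in range(self.n):
                if i == action:
                    self.probs[i] += self.lr * (1.0 - self.probs[i])
                else:
                    self.probs[i] *= 1.0 - self.lr

# Toy environment: processor speeds are hidden from the scheduler, so no
# prior information about the parallel system is used, matching the
# abstract's premise. Reward = completion time below the running mean.
speeds = [1.0, 2.0, 4.0, 8.0]            # assumed, unknown to the learner
sched = AutomatonScheduler(len(speeds))
mean_time, seen = 0.0, 0
for _ in range(5000):
    work = random.uniform(1.0, 10.0)     # dynamically arriving task
    p = sched.choose()
    t = work / speeds[p]                 # observed completion time
    seen += 1
    mean_time += (t - mean_time) / seen  # running average "error surface"
    sched.update(p, rewarded=(t < mean_time))

print("learned action probabilities:", [round(q, 3) for q in sched.probs])
```

Run over a stream of arrivals, the probability mass drifts toward the processor that most often beats the running average, illustrating how the learner builds the state-to-schedule association the abstract describes without any model of the machine.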