Processor scheduling in shared memory multiprocessors

  • Authors:
  • John Zahorjan; Cathy McCann

  • Affiliations:
  • Department of Computer Science and Engineering, University of Washington, Seattle, WA (both authors)

  • Venue:
  • SIGMETRICS '90: Proceedings of the 1990 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems
  • Year:
  • 1990

Abstract

Existing work indicates that the commonly used “single queue of runnable tasks” approach to scheduling shared memory multiprocessors can perform very poorly in a multiprogrammed parallel processing environment. A more promising approach is the class of “two-level schedulers” in which the operating system deals solely with allocating processors to jobs while the individual jobs themselves perform task dispatching on those processors.

In this paper we compare two basic varieties of two-level schedulers. Those of the first type, static, make a single decision per job regarding the number of processors to allocate to it. Once the job has received its allocation, it is guaranteed to have exactly that number of processors available to it whenever it is active. The other class of two-level scheduler, dynamic, allows each job to acquire and release processors during its execution. By responding to the varying parallelism of the jobs, the dynamic scheduler promises higher processor utilizations at the cost of potentially greater scheduling overhead and more complicated application level task control policies.

Our results, obtained via simulation, highlight the tradeoffs between the static and dynamic approaches. We investigate how the choice of policy is affected by the cost of switching a processor from one job to another. We show that for a wide range of plausible overhead values, dynamic scheduling is superior to static scheduling. Within the class of static schedulers, we show that, in most cases, a simple “run to completion” scheme is preferable to a round-robin approach. Finally, we investigate different techniques for tuning the allocation decisions required by the dynamic policies and quantify their effects on performance.

We believe our results are directly applicable to many existing shared memory parallel computers, which for the most part currently employ a simple “single queue of tasks” extension of basic sequential machine schedulers. We plan to validate our results in future work through implementation and experimentation on such a system.
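
To make the distinction between the two policy classes concrete, the sketch below contrasts a static allocator (one decision per job, fixed for the job's lifetime) with a dynamic allocator (processors acquired and released as a job's parallelism changes). This is a minimal illustration only, not the authors' simulation model: the class names, the 8-processor machine size, and the example jobs are all assumptions made for the sketch.

```python
# Illustrative sketch (not from the paper) of static vs. dynamic
# two-level processor allocation. All names and numbers are assumed.

from dataclasses import dataclass

TOTAL_PROCESSORS = 8  # assumed machine size for the example


@dataclass
class Job:
    name: str
    parallelism: int   # processors the job could currently use
    allocated: int = 0


class StaticScheduler:
    """Static policy: one allocation decision per job, fixed thereafter."""

    def __init__(self, total):
        self.free = total

    def admit(self, job):
        # Grant min(request, free) once; the job keeps it for its lifetime.
        job.allocated = min(job.parallelism, self.free)
        self.free -= job.allocated

    def finish(self, job):
        self.free += job.allocated
        job.allocated = 0


class DynamicScheduler:
    """Dynamic policy: jobs acquire and release processors as their
    parallelism varies, at the cost of processor-switch overhead."""

    def __init__(self, total):
        self.free = total

    def request(self, job, wanted):
        grant = min(wanted, self.free)
        self.free -= grant
        job.allocated += grant
        return grant

    def release(self, job, count):
        count = min(count, job.allocated)
        job.allocated -= count
        self.free += count


if __name__ == "__main__":
    a, b = Job("A", parallelism=6), Job("B", parallelism=5)

    static = StaticScheduler(TOTAL_PROCESSORS)
    static.admit(a)
    static.admit(b)
    print("static: ", a.allocated, b.allocated)   # 6 and 2, fixed until completion

    a.allocated = b.allocated = 0
    dyn = DynamicScheduler(TOTAL_PROCESSORS)
    dyn.request(a, 6)
    dyn.request(b, 5)
    dyn.release(a, 4)      # A's parallelism drops; processors return to the pool
    dyn.request(b, 4)      # B picks them up immediately
    print("dynamic:", a.allocated, b.allocated)   # 2 and 6 after reallocation
```

Under the static policy, processors left idle by a job whose parallelism has dropped cannot be reassigned, whereas the dynamic policy reallocates them at the price of the switching overhead the paper quantifies.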