Affinity scheduling of unbalanced workloads

  • Authors:
  • Srikant Subramaniam; Derek L. Eager

  • Affiliations:
  • University of Saskatchewan, Saskatoon, Canada

  • Venue:
  • Proceedings of the 1994 ACM/IEEE conference on Supercomputing
  • Year:
  • 1994

Abstract

Scheduling in a shared-memory multiprocessor is often complicated by the fact that a unit of work may be processed more efficiently on one processor than on any other, due to factors such as the presence of required data in a local cache. In such a case, the unit of work is said to have an "affinity" for that processor. The scheduling issue to be considered is the tradeoff between the goals of respecting processor affinities (so as to obtain improved execution efficiency) and of dynamically assigning each unit of work to whichever processor is least loaded at the time (so as to obtain better load balance and decreased processor idle time).

A specific context in which this scheduling issue arises is that of shared-memory multiprocessors with large per-processor caches or cached main memories. The shared-memory programming paradigm of such machines permits dynamic scheduling of work. The data required by a unit of work may, however, often reside mostly in the cache of one particular processor, to which that unit of work thus has affinity.

In this paper, two new "affinity scheduling" algorithms are proposed for a context in which the units of work have widely varying execution times. An experimental study finds these algorithms to perform well in this context.
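
The abstract does not detail the two proposed algorithms, so the following is only a minimal sketch of the affinity-versus-load-balance tradeoff it describes, not the paper's method. The names and parameters (NUM_PROCS, MIGRATION_THRESHOLD, migration_penalty, the skewed task distribution) are illustrative assumptions: each task is queued on its affinity processor unless that queue is much longer than the shortest one, in which case it migrates and pays a cache-reload penalty.

```python
import random
from collections import deque

NUM_PROCS = 4
MIGRATION_THRESHOLD = 2  # hypothetical imbalance (queue-length) threshold for migrating work


class Task:
    def __init__(self, tid, affinity, cost):
        self.tid = tid
        self.affinity = affinity      # processor whose cache likely holds this task's data
        self.cost = cost              # base execution time on the affinity processor
        self.migration_penalty = 1.5  # assumed slowdown when run on a non-affinity processor


def assign(task, queues):
    """Respect affinity unless the affinity queue is much longer than the
    shortest queue; then migrate the task to the least-loaded processor."""
    least_loaded = min(range(NUM_PROCS), key=lambda p: len(queues[p]))
    if len(queues[task.affinity]) - len(queues[least_loaded]) > MIGRATION_THRESHOLD:
        return least_loaded   # accept the cache-reload cost to reduce imbalance
    return task.affinity      # keep the task where its data is cached


def run(tasks):
    queues = [deque() for _ in range(NUM_PROCS)]
    for t in tasks:
        queues[assign(t, queues)].append(t)
    # Report per-processor work, charging the penalty to migrated tasks.
    for p, q in enumerate(queues):
        work = sum(t.cost * (1.0 if t.affinity == p else t.migration_penalty) for t in q)
        print(f"processor {p}: {len(q)} tasks, {work:.1f} time units")


if __name__ == "__main__":
    random.seed(0)
    # Widely varying execution times and skewed affinities: an unbalanced workload.
    tasks = [Task(i, random.choice([0, 0, 0, 1, 2, 3]), random.expovariate(1 / 10))
             for i in range(40)]
    run(tasks)
```

Raising MIGRATION_THRESHOLD in this sketch favors affinity (fewer cache reloads, more idle time on lightly loaded processors); lowering it favors load balance, which is exactly the tension the abstract identifies.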