Self-Adjusting Scheduling: An On-Line Optimization Technique for Locality Management and Load Balancing

  • Authors:
  • Babak Hamidzadeh; David J. Lilja

  • Affiliations:
  • University of Science & Technology, Hong Kong; University of Minnesota, USA

  • Venue:
  • ICPP '94 Proceedings of the 1994 International Conference on Parallel Processing - Volume 02
  • Year:
  • 1994


Abstract

Techniques for scheduling parallel tasks onto the processors of a multiprocessor architecture must trade off three interrelated factors: 1) scheduling and synchronization costs, 2) load balancing, and 3) memory locality. Current scheduling techniques typically consider only one or two of these three factors at a time. We propose a novel Self-Adjusting Scheduling (SAS) algorithm that addresses all three factors simultaneously. This algorithm dedicates a single processor to executing an on-line branch-and-bound algorithm that searches for partial schedules concurrently with the execution of tasks previously assigned to the remaining processors. This overlapping of scheduling and execution, together with self-adjustment of the duration of the partial scheduling phases, significantly reduces scheduling and synchronization costs. To address load balancing and locality management, SAS introduces a unified cost model that accounts for both factors simultaneously. We compare the simulated performance of SAS with that of the Affinity Scheduling algorithm (AFS). The results of our experiments demonstrate that the potential loss of performance caused by dedicating a processor to scheduling is outweighed by the higher performance produced by SAS's dynamically adjusted schedules, even in systems with a small number of processors. SAS is a general on-line optimization technique that can be applied to a variety of dynamic scheduling problems.
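The core scheduling step the abstract describes — a branch-and-bound search over partial schedules scored by a unified cost model combining load balance and locality — can be sketched roughly as follows. This is a minimal illustration, not the paper's formulation: the cost function, the `alpha` locality weight, and the data shapes (`tasks` as a dict of durations, `affinity` mapping each task to its preferred processor) are all illustrative assumptions.

```python
def unified_cost(loads, assignment, tasks, affinity, alpha=0.5):
    """Illustrative unified cost: projected makespan (load balance)
    plus a locality penalty for each task placed off the processor
    that holds its data (its affinity processor)."""
    proj = list(loads)
    penalty = 0.0
    for tid, proc in assignment:
        proj[proc] += tasks[tid]
        if affinity[tid] != proc:
            penalty += alpha * tasks[tid]  # assumed cost of losing locality
    return max(proj) + penalty

def branch_and_bound(batch, tasks, loads, affinity, nprocs):
    """Branch-and-bound over assignments of a small task batch to
    processors, pruning any branch whose projected makespan already
    meets or exceeds the best complete schedule found so far."""
    best_cost, best_sched = float("inf"), None

    def expand(i, assignment, proj):
        nonlocal best_cost, best_sched
        # Bound: projected makespan is a lower bound on the final cost.
        if max(proj) >= best_cost:
            return
        if i == len(batch):
            cost = unified_cost(loads, assignment, tasks, affinity)
            if cost < best_cost:
                best_cost, best_sched = cost, list(assignment)
            return
        tid = batch[i]
        for proc in range(nprocs):  # branch: try each processor
            proj[proc] += tasks[tid]
            assignment.append((tid, proc))
            expand(i + 1, assignment, proj)
            assignment.pop()
            proj[proc] -= tasks[tid]

    expand(0, [], list(loads))
    return best_cost, best_sched

# Two tasks, two idle processors, each task affine to a different
# processor: the minimum-cost partial schedule respects both locality
# and balance by placing each task on its affinity processor.
cost, sched = branch_and_bound(
    batch=[0, 1],
    tasks={0: 4.0, 1: 2.0},
    loads=[0.0, 0.0],
    affinity={0: 0, 1: 1},
    nprocs=2,
)
```

In SAS this search would run on the dedicated scheduler processor while the other processors execute the previously committed batch, and the length of each scheduling phase would itself be adjusted on-line; the sketch shows only a single scheduling step.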