History-aware Self-Scheduling

  • Authors: Arun Kejariwal; Alexandru Nicolau; Constantine D. Polychronopoulos
  • Affiliations: University of California at Irvine, USA; University of California at Irvine, USA; University of Illinois at Urbana-Champaign, USA
  • Venue: ICPP '06: Proceedings of the 2006 International Conference on Parallel Processing
  • Year: 2006

Abstract

Scheduling parallel loops, i.e., the way iterations are mapped onto different processors, plays a critical role in the efficient execution of programs, particularly supercomputing applications, on multiprocessor systems. In applications where the problem dimension (and hence execution time) depends on run-time data, loop iterations also tend to be of variable length. This variability affects both sequential and parallel loops, in particular nested loops, and is quite prevalent in sparse matrix solvers. In this paper, we propose an (execution) history-aware approach for self-scheduling irregular parallel loops on heterogeneous multiprocessor systems. First, the proposed method computes the chunk size, i.e., the amount of work allocated to a processor at each scheduling step, based on the variance in workload distribution across the iteration space. Second, it fine-tunes the chunk size based on the execution history of the loop, wherein the workload of an iteration is determined at run-time from the statistical deviation of the workload estimates of previously executed iterations from their corresponding actual workloads. We evaluate our techniques using a set of kernels (extracted from the industry-standard SPEC OMPM2001 benchmark suite) with uneven workload distributions. The results show that our technique performs 5% to 18% better than existing schemes.
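
The sketch below illustrates the two steps described in the abstract, not the authors' exact formulas: a base chunk size that shrinks as the workload variance across the iteration space grows, followed by a history-based correction driven by how far past workload estimates have deviated from measured workloads. All names (`history_t`, `base_chunk`, `tune_chunk`, `cv`) and the specific scaling choices are illustrative assumptions.

```c
/* Minimal sketch of a history-aware chunk-size computation.
 * The structure mirrors the two steps in the abstract; the exact
 * formulas are simplified placeholders, not the paper's scheme. */
#include <stddef.h>

typedef struct {
    double est_sum;     /* sum of workload estimates of executed iterations */
    double actual_sum;  /* sum of measured workloads of executed iterations */
    size_t count;       /* number of iterations executed so far             */
} history_t;

/* Step 1: start from a guided-self-scheduling-like chunk (remaining / P)
 * and shrink it when the coefficient of variation (cv) of per-iteration
 * workload is high, so highly uneven loops are scheduled in finer grains. */
static size_t base_chunk(size_t remaining, int num_procs, double cv)
{
    double guided = (double)remaining / num_procs;
    double scaled = guided / (1.0 + cv);   /* more variance -> smaller chunk */
    return scaled < 1.0 ? 1 : (size_t)scaled;
}

/* Step 2: fine-tune using execution history. If estimates have been too
 * optimistic (actual > estimate), shrink the chunk; if pessimistic, grow it. */
static size_t tune_chunk(size_t chunk, const history_t *h)
{
    if (h->count == 0 || h->est_sum <= 0.0)
        return chunk;                              /* no history yet */
    double ratio = h->actual_sum / h->est_sum;     /* >1 means underestimated */
    double tuned = (double)chunk / ratio;
    return tuned < 1.0 ? 1 : (size_t)tuned;
}
```

In this reading, each processor would call `base_chunk` followed by `tune_chunk` at every scheduling step, and the runtime would update the shared `history_t` as iterations complete, so later chunks reflect the observed estimation error.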