Orchestrating interactions among parallel computations

  • Authors:
  • Susan L. Graham, Steven Lucco, Oliver Sharp

  • Venue:
  • PLDI '93: Proceedings of the ACM SIGPLAN 1993 Conference on Programming Language Design and Implementation
  • Year:
  • 1993

Abstract

Many parallel programs contain multiple sub-computations, each with distinct communication and load balancing requirements. The traditional approach to compiling such programs is to impose a processor synchronization barrier between sub-computations, optimizing each as a separate entity. This paper develops a methodology for managing the interactions among sub-computations, avoiding strict synchronization where concurrent or pipelined relationships are possible.

Our approach to compiling parallel programs has two components: symbolic data access analysis and adaptive runtime support. We summarize the data access behavior of sub-computations (such as loop nests) and split them to expose concurrency and pipelining opportunities. The split transformation has been incorporated into an extended FORTRAN compiler, which outputs a FORTRAN 77 program augmented with calls to library routines written in C, together with a coarse-grained dataflow graph summarizing the exposed parallelism.

The compiler encodes symbolic information, including loop bounds and communication requirements, for an adaptive runtime system, which uses runtime information to improve the scheduling efficiency of irregular sub-computations. The runtime system incorporates algorithms that allocate processing resources to concurrently executing sub-computations and choose communication granularity. We have demonstrated that these dynamic techniques substantially improve performance on a range of production applications, including climate modeling and x-ray tomography, especially when large numbers of processors are available.
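To make the pipelining idea concrete, below is a minimal C sketch (C being the language of the paper's runtime library) of two sub-computations that a synchronization barrier would serialize but that instead run as overlapping pipeline stages, handing off blocks of rows whose size plays the role of the communication granularity the runtime tunes. The producer/consumer structure, the pthreads-based handoff, and the fixed GRAIN parameter are illustrative assumptions, not the paper's actual compiler output or runtime system.

/* Sketch: a loop nest split into two pipelined stages.  Stage 1 produces
 * rows of array a; stage 2 consumes them to compute b.  Rather than a
 * barrier between the two loops, the stages overlap, communicating in
 * blocks of GRAIN rows.  GRAIN stands in for the communication
 * granularity that the paper's runtime system chooses adaptively; here
 * it is fixed for simplicity. */
#include <pthread.h>
#include <stdio.h>

#define N      1024            /* rows in the array           */
#define GRAIN  64              /* rows per communicated block */

static double a[N], b[N];

/* Handoff state: "ready" counts rows that stage 2 may consume. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int ready = 0;

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 0; i < N; i++) {
        a[i] = (double)i * 0.5;          /* first sub-computation */
        if ((i + 1) % GRAIN == 0 || i == N - 1) {
            pthread_mutex_lock(&lock);   /* publish a block of rows */
            ready = i + 1;
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&lock);
        }
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < N; i++) {
        pthread_mutex_lock(&lock);       /* wait until row i is published */
        while (ready <= i)
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
        b[i] = a[i] * 2.0;               /* second sub-computation */
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("b[N-1] = %g\n", b[N - 1]);   /* expect (N-1)*0.5*2 = 1023 */
    return 0;
}

A larger GRAIN amortizes synchronization cost; a smaller one shortens the pipeline fill delay. In the system the paper describes, that trade-off would be resolved adaptively from runtime information such as loop bounds and communication requirements, rather than fixed at compile time as in this sketch.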