Compiling Functional Parallelism on Distributed-Memory Systems

  • Authors:
  • Santosh S. Pande, Dharma P. Agrawal, Jon Mauney

  • Venue:
  • IEEE Parallel & Distributed Technology: Systems & Technology
  • Year:
  • 1994


Abstract

We have developed an automatic compilation method that combines data- and code-based approaches to schedule a program's functional parallelism onto distributed-memory systems. Our method works with Sisal, a parallel functional language, and replaces the back end of the Optimizing Sisal Compiler so that it produces code for distributed-memory systems. Our extensions allow the compiler to generate code for Intel's distributed-memory Touchstone machines (the iPSC/860 Gamma, the Delta, and the Paragon). The modified compiler can generate a partition that minimizes program completion time (for systems with many processors) or the required number of processors (for systems with few processors). To accomplish this, we have developed a heuristic algorithm that uses the new concept of a threshold to treat scheduling as a trade-off between schedule length and the number of required processors. Most compilers for distributed-memory systems force the programmer to partition the data or the program code. This modified Sisal compiler handles both tasks automatically in a unified framework, and lets the programmer compile for a chosen number of processors.
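The paper itself does not publish its algorithm in this abstract, but the threshold idea it describes can be illustrated with a minimal list-scheduling sketch. The code below is an assumption-laden toy, not the authors' heuristic: it schedules a task DAG and uses a `threshold` parameter to decide when a task may be delayed onto an already-allocated processor (saving processors) versus started immediately on a fresh one (shortening the schedule). All names (`schedule`, `tasks`, `deps`) are invented for this example.

```python
# Illustrative sketch only -- NOT the algorithm from the paper.
# A greedy list scheduler for a task DAG where `threshold` trades
# schedule length against processor count: a small threshold spawns
# new processors eagerly (short schedule, many processors); a large
# threshold tolerates delay and reuses processors (fewer processors).
from collections import defaultdict

def schedule(tasks, deps, threshold):
    """tasks: {name: duration}; deps: {name: [predecessor names]}.
    Returns (makespan, processor_count, assignment)."""
    # Topologically order the tasks.
    indeg = {t: len(deps.get(t, [])) for t in tasks}
    succs = defaultdict(list)
    for t, preds in deps.items():
        for p in preds:
            succs[p].append(t)
    ready = [t for t in tasks if indeg[t] == 0]
    order = []
    while ready:
        t = ready.pop()
        order.append(t)
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)

    proc_free = []   # finish time of the last task on each processor
    finish = {}      # task -> finish time
    assign = {}      # task -> processor index
    for t in order:
        # Earliest start: all predecessors must have finished.
        est = max((finish[p] for p in deps.get(t, [])), default=0.0)
        if proc_free:
            i = min(range(len(proc_free)), key=lambda k: proc_free[k])
            delay = max(est, proc_free[i]) - est
        else:
            i, delay = None, None
        # Allocate a new processor only when reusing the best existing
        # one would delay the task by more than threshold * duration.
        if i is None or delay > threshold * tasks[t]:
            assign[t] = len(proc_free)
            finish[t] = est + tasks[t]
            proc_free.append(finish[t])
        else:
            start = max(est, proc_free[i])
            assign[t] = i
            finish[t] = start + tasks[t]
            proc_free[i] = finish[t]
    return max(finish.values()), len(proc_free), assign
```

With `threshold = 0.0` two independent unit tasks run on two processors in one time unit; with a large threshold they share one processor and take two units, mirroring the length-versus-processors trade-off the abstract describes.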