Automatic parallelization of APL-style programs

  • Authors:
  • Wai-Mee Ching

  • Affiliations:
  • Thomas J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY

  • Venue:
  • APL '90 Conference proceedings on APL 90: for the future
  • Year:
  • 1990

Abstract

APL-style programs use high-level primitives on arrays instead of DO-loops whenever possible. For such programs, the average size of a basic block is much larger than in their FORTRAN counterparts. Hence, it is sufficiently profitable and relatively easy to concentrate on basic blocks when parallelizing APL-style programs. Such an approach, however, depends on an APL compiler. The APL/370 compiler we have been developing aims at implementing automatic parallelization of APL programs at the basic block level. The compiler exploits functional parallelism on data-independent sub-expressions and data parallelism of array primitives on array elements. The compiler front end performs a local data dependency analysis and emits synchronization flags at function nodes. The back end partitions the (assembly code) array loops. A set of low-level synchronization primitives on MVS has also been developed. This will enable us to run compiled applications in parallel mode on IBM 3090 multi-processors and to assess the effectiveness of various scheduling methods on a shared-memory model.
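
To illustrate the kind of data parallelism the abstract describes, the sketch below shows how a single element-wise APL primitive (such as A+B on vectors) can be partitioned into loop chunks that run concurrently on a shared-memory machine. This is only a minimal illustration in C with POSIX threads; the thread count, chunking scheme, and pthread API are assumptions for exposition, not the paper's actual MVS synchronization primitives or the APL/370 compiler's scheduling.

```c
/* Minimal sketch: partitioning an element-wise array primitive (A+B)
 * across worker threads on a shared-memory machine.  Assumed details
 * (pthreads, NTHREADS, chunking) are illustrative only. */
#include <pthread.h>
#include <stddef.h>

#define NTHREADS 4

typedef struct {
    const double *a, *b;   /* operands of the array primitive      */
    double *r;             /* result vector                        */
    size_t lo, hi;         /* half-open index range for this chunk */
} chunk_t;

/* Each worker evaluates the primitive on its contiguous slice. */
static void *add_chunk(void *arg) {
    chunk_t *c = arg;
    for (size_t i = c->lo; i < c->hi; i++)
        c->r[i] = c->a[i] + c->b[i];
    return NULL;
}

/* Split the n-element loop into NTHREADS roughly equal chunks, run
 * them concurrently, and join at the end -- a stand-in for the
 * synchronization that closes the parallel region. */
void parallel_add(const double *a, const double *b, double *r, size_t n) {
    pthread_t tid[NTHREADS];
    chunk_t chunk[NTHREADS];
    size_t step = (n + NTHREADS - 1) / NTHREADS;

    for (int t = 0; t < NTHREADS; t++) {
        chunk[t].a = a; chunk[t].b = b; chunk[t].r = r;
        chunk[t].lo = (size_t)t * step;
        chunk[t].hi = chunk[t].lo + step < n ? chunk[t].lo + step : n;
        pthread_create(&tid[t], NULL, add_chunk, &chunk[t]);
    }
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);
}
```

Functional parallelism on data-independent sub-expressions would, by analogy, dispatch such loops for several independent primitives within one basic block before joining at the point where their results are combined.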