Implementation machine paradigm for parallel programming

  • Authors:
  • Manohar Rao; Zary Segall; Dalibor Vrsalovic

  • Affiliations:
  • Department of Electrical and Computer Engineering and School of Computer Science, Carnegie Mellon University, Pittsburgh, PA (all authors)

  • Venue:
  • Proceedings of the 1990 ACM/IEEE conference on Supercomputing
  • Year:
  • 1990

Abstract

Most supercomputers today are parallel computers. In this paper, an approach for efficiently mapping parallel applications onto parallel MIMD machine architectures is introduced, and its applicability to uniform memory access multiprocessors is demonstrated. The paper shows that an intermediate layer of abstraction between the application level and the parallel architecture level is conducive not only to better software productivity, but also to performance-efficient programs. The intermediate layer consists of a set of commonly used parallel programming paradigms (implementation machines). A mathematical representation and a pragmatic representation are provided for each implementation machine (IM). The user maps the application onto one IM or a set of IMs, and the system implements the IMs efficiently on the underlying parallel machine.

To illustrate the power of this concept, two different ways of parallelizing a molecular dynamics algorithm (master-slave and pipeline) on a shared memory multiprocessor are presented. Mathematical models and pragmatic representations are developed for both of these implementation machines. The utility of the IM approach to parallel programming is demonstrated: students in a graduate course on parallel processing were asked to write parallel programs using the IM approach, and the results of this experiment indicate that the IM approach indeed leads to performance-efficient parallel programs.
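The abstract does not give the paper's pragmatic representations, so the following is only a minimal sketch of the master-slave paradigm it names, written with POSIX threads on a shared-memory machine; the names (N_PARTICLES, N_SLAVES, slave_work) and the placeholder pair computation are illustrative assumptions, not the paper's molecular dynamics code or its IM notation.

/*
 * Hypothetical master-slave sketch: the master partitions particle indices,
 * each slave computes a partial result over its block (a stand-in for a
 * molecular-dynamics force loop), and the master combines the results.
 */
#include <pthread.h>
#include <stdio.h>

#define N_PARTICLES 1024
#define N_SLAVES    4

static double pos[N_PARTICLES];   /* shared, read-only particle data */
static double partial[N_SLAVES];  /* one result slot per slave       */

struct range { int lo, hi, id; };

static void *slave_work(void *arg)
{
    struct range *r = arg;
    double acc = 0.0;
    /* Each slave handles its own block of particles against all others. */
    for (int i = r->lo; i < r->hi; i++)
        for (int j = 0; j < N_PARTICLES; j++)
            if (i != j)
                acc += pos[i] - pos[j];   /* placeholder for a pair force */
    partial[r->id] = acc;
    return NULL;
}

int main(void)
{
    pthread_t slaves[N_SLAVES];
    struct range ranges[N_SLAVES];

    for (int i = 0; i < N_PARTICLES; i++)
        pos[i] = (double)i;

    /* Master: partition the index space and spawn the slaves. */
    int chunk = N_PARTICLES / N_SLAVES;
    for (int s = 0; s < N_SLAVES; s++) {
        ranges[s].lo = s * chunk;
        ranges[s].hi = (s == N_SLAVES - 1) ? N_PARTICLES : (s + 1) * chunk;
        ranges[s].id = s;
        pthread_create(&slaves[s], NULL, slave_work, &ranges[s]);
    }

    /* Master: wait for the slaves and combine their partial results. */
    double total = 0.0;
    for (int s = 0; s < N_SLAVES; s++) {
        pthread_join(slaves[s], NULL);
        total += partial[s];
    }
    printf("combined result: %f\n", total);
    return 0;
}

Compiled with -pthread, the sketch mirrors the division of roles the abstract describes: the user expresses the application in terms of the IM (partition, compute, combine), while the mapping of slaves onto processors is left to the underlying system.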