Exploiting Distributed-Memory and Shared-Memory Parallelism on Clusters of SMPs with Data Parallel Programs

  • Authors:
  • Siegfried Benkner; Viera Sipkova

  • Affiliations:
  • Institute for Software Science, University of Vienna, Liechtensteinstrasse 22, A-1090 Vienna, Austria (sigi@par.univie.ac.at; sipka@par.univie.ac.at)

  • Venue:
  • International Journal of Parallel Programming
  • Year:
  • 2003

Abstract

Clusters of SMPs are hybrid-parallel architectures that combine the main concepts of distributed-memory and shared-memory parallel machines. Although SMP clusters are widely used in the high performance computing community, no single programming paradigm exploits the hierarchical structure of these machines. Most parallel applications deployed on SMP clusters are based on MPI, the standard API for distributed-memory parallel programming, and thus may miss a number of optimization opportunities offered by the shared memory available within SMP nodes. In this paper we present extensions to the data parallel programming language HPF, together with associated compilation techniques, for optimizing HPF programs on clusters of SMPs. The proposed extensions enable programmers to control key aspects of distributed-memory and shared-memory parallelization at a high level of abstraction. Based on these language extensions, a compiler can adopt a hybrid parallelization strategy that closely reflects the hierarchical structure of SMP clusters, automatically exploiting shared-memory parallelism via OpenMP within cluster nodes and distributed-memory parallelism via MPI across nodes. We describe the implementation of these features in the VFC compiler and present experimental results that show the effectiveness of these techniques.
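
The hybrid strategy the abstract describes (OpenMP threads within each SMP node, MPI processes across nodes) follows a well-known pattern. The sketch below is an illustrative, hand-written Fortran example of that pattern; it is not output of the VFC compiler and does not use the paper's proposed HPF extensions, and the program name hybrid_sketch is hypothetical.

    program hybrid_sketch
      ! Illustrative hybrid MPI+OpenMP pattern: MPI provides
      ! distributed-memory parallelism across cluster nodes, OpenMP
      ! provides shared-memory parallelism within each SMP node.
      use mpi
      implicit none
      integer :: ierr, rank, provided, i
      real :: local_sum, global_sum
      real, dimension(1000) :: a

      ! Request threaded MPI; with MPI_THREAD_FUNNELED only the
      ! master thread makes MPI calls.
      call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

      a = real(rank + 1)   ! each process owns a local block of the data
      local_sum = 0.0

      ! Intra-node shared-memory parallelism (OpenMP).
      !$omp parallel do reduction(+:local_sum)
      do i = 1, size(a)
         local_sum = local_sum + a(i)
      end do
      !$omp end parallel do

      ! Inter-node distributed-memory parallelism (MPI).
      call MPI_Reduce(local_sum, global_sum, 1, MPI_REAL, MPI_SUM, 0, &
                      MPI_COMM_WORLD, ierr)
      if (rank == 0) print *, 'global sum = ', global_sum

      call MPI_Finalize(ierr)
    end program hybrid_sketch

Built with an MPI Fortran wrapper and OpenMP enabled (e.g., mpif90 -fopenmp), such a program would typically be launched with one MPI process per SMP node, letting OpenMP threads occupy the cores within each node, which mirrors the two-level structure the paper's compilation strategy targets.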