Overlapping communication and computation with OpenMP and MPI

  • Authors:
  • Timothy H. Kaiser; Scott B. Baden

  • Affiliations:
  • University of California, San Diego, San Diego Supercomputer Center, MC 0505, 9500 Gilman Drive, La Jolla, CA 92093-0505, USA. Tel.: +1 858 534 5157 / Fax: +1 858 534 5117 / E-mail: tkaiser@sdsc.edu; Computer Science and Engineering Department, University of California, San Diego, 9500 Gilman Drive, Mail Stop 0114, La Jolla, CA 92093-0114, USA. Tel.: +1 858 534 8861 / Fax: +1 858 534 7029 / E-ma ...

  • Venue:
  • Scientific Programming
  • Year:
  • 2001

Abstract

Machines composed of a distributed collection of shared-memory or SMP nodes are becoming common platforms for parallel computing, and on many of them OpenMP can be combined with MPI. We discuss the motivations for combining OpenMP and MPI. Although OpenMP is typically used to exploit loop-level parallelism, it can also express coarse-grain parallelism, potentially incurring less overhead. We show how coarse-grain OpenMP parallelism can also be used to overlap MPI communication with computation in stencil-based grid programs, such as a program performing Gauss-Seidel iteration with red-black ordering. Spatial subdivision, or domain decomposition, assigns a portion of the grid to each thread; one thread is assigned a null calculation region so that it is free to perform communication. Example calculations were run on an IBM SP using both the Kuck & Associates and IBM compilers.