Fine-Grained Multithreading Support for Hybrid Threaded MPI Programming

  • Authors:
  • Pavan Balaji, Darius Buntinas, David Goodell, William Gropp, and Rajeev Thakur

  • Affiliations:
  • Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439, USA (Balaji, Buntinas, Goodell, Thakur); Department of Computer Science, University of Illinois, Urbana, IL 61801, USA (Gropp)

  • Venue:
  • International Journal of High Performance Computing Applications
  • Year:
  • 2010

Abstract

As high-end computing systems continue to grow in scale, recent advances in multi- and many-core architectures have pushed such growth toward denser architectures, that is, more processing elements per physical node rather than more physical nodes. Although a large number of scientific applications have so far relied on an MPI-everywhere model for programming high-end parallel systems, this model may not be sufficient for future machines, given their physical constraints such as decreasing amounts of memory per processing element and shared caches. As a result, application and computer scientists are exploring alternative programming models that use MPI between address spaces and some other threaded model, such as OpenMP, Pthreads, or Intel TBB, within an address space. Such hybrid models require efficient support from an MPI implementation for MPI messages sent from multiple threads simultaneously. In this paper, we explore the issues involved in designing such an implementation. We present four approaches to building a fully thread-safe MPI implementation, with decreasing levels of critical-section granularity (from coarse-grained locks to fine-grained locks to lock-free operations) and correspondingly increasing levels of complexity. We present performance results that demonstrate the performance implications of the different approaches.