Challenges and issues of supporting task parallelism in MPI

  • Authors:
  • Márcia C. Cera, João V. F. Lima, Nicolas Maillard, Philippe O. A. Navaux

  • Affiliations:
  • Universidade Federal do Rio Grande do Sul, Brazil (all authors)

  • Venue:
  • EuroMPI'10: Proceedings of the 17th European MPI Users' Group Meeting on Recent Advances in the Message Passing Interface
  • Year:
  • 2010

Abstract

Task parallelism extracts the potential parallelism of irregular structures, which vary with the input data, by defining abstract tasks and their dependencies. Shared-memory APIs such as OpenMP and TBB support this model and achieve performance through efficient task scheduling. In this work, we provide arguments in favor of supporting task parallelism in MPI. We explain how native MPI can be used to define tasks, their dependencies, and their runtime scheduling, and we discuss the resulting performance issues. Our preliminary experiments show that efficient task-parallel MPI programs are feasible and that they widen the range of applications covered by the MPI standard.
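As a concrete illustration of the idea the abstract sketches, the minimal C program below expresses one abstract task as a process spawned at runtime with MPI-2 dynamic process management (MPI_Comm_spawn), and one dependency as a message the task must receive before it can compute. This is only a hedged sketch of how native MPI can encode tasks; the single-binary layout (spawning argv[0]), the integer payload, and the placeholder doubling computation are assumptions made for the example, not the paper's implementation.

    /* Sketch: a "task" as a dynamically spawned MPI process.
     * Payload, granularity, and computation are illustrative only. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Comm parent, task;
        MPI_Init(&argc, &argv);
        MPI_Comm_get_parent(&parent);

        if (parent == MPI_COMM_NULL) {
            /* Master side: each abstract task becomes one spawned
             * process; a runtime scheduler would choose when and
             * on which node to place it. */
            int input = 42, result;
            MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                           0, MPI_COMM_SELF, &task,
                           MPI_ERRCODES_IGNORE);
            /* A dependency is a message: the task blocks until its
             * input data arrives. */
            MPI_Send(&input, 1, MPI_INT, 0, 0, task);
            MPI_Recv(&result, 1, MPI_INT, 0, 0, task,
                     MPI_STATUS_IGNORE);
            printf("task produced %d\n", result);
        } else {
            /* Task side: receive input, compute, return result. */
            int input, result;
            MPI_Recv(&input, 1, MPI_INT, 0, 0, parent,
                     MPI_STATUS_IGNORE);
            result = 2 * input;   /* placeholder computation */
            MPI_Send(&result, 1, MPI_INT, 0, 0, parent);
        }
        MPI_Finalize();
        return 0;
    }

Launched with, e.g., mpiexec -n 1 ./task_demo (on an MPI implementation that supports dynamic process creation), each MPI_Comm_spawn call adds one task process, and the point-to-point messages naturally serialize dependent tasks.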