Improving MPI communication overlap with collaborative polling

  • Authors:
  • Sylvain Didelot;Patrick Carribault;Marc Pérache;William Jalby

  • Affiliations:
  • Exascale Computing Research Center, Versailles, France and Université de Versailles Saint-Quentin-en-Yvelines (UVSQ), Versailles, France; DAM, DIF, CEA, Arpajon, France and Exascale Computing Research Center, Versailles, France; DAM, DIF, CEA, Arpajon, France and Exascale Computing Research Center, Versailles, France; Exascale Computing Research Center, Versailles, France and Université de Versailles Saint-Quentin-en-Yvelines (UVSQ), Versailles, France

  • Venue:
  • EuroMPI'12: Proceedings of the 19th European Conference on Recent Advances in the Message Passing Interface
  • Year:
  • 2012

Abstract

With the rise of parallel application complexity, the need for computational power is continually growing. Recent trends in High-Performance Computing (HPC) have shown that improvements in single-core performance will not be sufficient to face the challenges of an Exascale machine: we expect an enormous growth in the number of cores as well as a multiplication of the data volume exchanged across compute nodes. To scale applications up to Exascale, the communication layer has to minimize the time spent waiting for network messages. This paper presents a message-progression strategy based on Collaborative Polling, which enables efficient, auto-adaptive overlap of communication phases with computation. This approach is novel in that it increases an application's overlap potential without introducing the overheads of a threaded message progression.
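For context, the sketch below (plain C with standard MPI non-blocking calls) shows the overlap pattern the paper targets: the application posts MPI_Isend/MPI_Irecv and must either poll explicitly (here with MPI_Testall) or dedicate a progress thread so that messages advance during computation. Collaborative Polling instead performs this progression inside the runtime on behalf of waiting tasks; the functions overlapped_exchange and compute_chunk are hypothetical names used only for this illustration, not part of the authors' implementation.

    /* Illustration of manual communication/computation overlap with explicit
     * polling. Collaborative Polling aims to provide this overlap without
     * requiring the application to insert such progression calls. */
    #include <mpi.h>
    #include <stddef.h>

    /* Placeholder computation on one chunk of the working set. */
    static void compute_chunk(double *data, size_t begin, size_t end)
    {
        for (size_t i = begin; i < end; i++)
            data[i] = data[i] * 1.000001 + 1.0;
    }

    static void overlapped_exchange(double *sendbuf, double *recvbuf, int count,
                                    int peer, double *work, size_t nchunks,
                                    size_t chunk_size)
    {
        MPI_Request reqs[2];
        int done = 0;

        /* Post non-blocking communication up front. */
        MPI_Irecv(recvbuf, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(sendbuf, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

        /* Interleave computation with explicit progression calls. */
        for (size_t c = 0; c < nchunks; c++) {
            compute_chunk(work, c * chunk_size, (c + 1) * chunk_size);
            if (!done)
                MPI_Testall(2, reqs, &done, MPI_STATUSES_IGNORE);
        }

        /* Block only for whatever communication is still pending. */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }

Without the MPI_Testall calls (or a dedicated progress thread), many MPI implementations only progress messages inside blocking calls, so the communication would effectively serialize with the computation; the paper's approach lets idle polling time in one task drive progression for others, avoiding both the manual instrumentation and the threading overhead.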