On the Performance of Transparent MPI Piggyback Messages

  • Authors:
  • Martin Schulz; Greg Bronevetsky; Bronis R. Supinski

  • Affiliations:
  • Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, CA 94551, USA (all authors)

  • Venue:
  • Proceedings of the 15th European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface
  • Year:
  • 2008


Abstract

Many tools, including performance analysis tools, tracing libraries, and application-level checkpointers, add piggyback data to messages. However, transparently implementing this functionality on top of MPI is not trivial and can severely reduce application performance. We study three transparent piggyback implementations on multiple production platforms and demonstrate that all are inefficient for some application scenarios. Overall, our results show that efficient piggyback support requires mechanisms within the MPI implementation and, thus, that the interface should be extended to support them.