A case for non-blocking collective operations

  • Authors:
  • Torsten Hoefler; Jeffrey M. Squyres; Wolfgang Rehm; Andrew Lumsdaine

  • Affiliations:
  • Open Systems Lab, Indiana University, Bloomington, IN; Cisco Systems, San Jose, CA; Technical University of Chemnitz, Chemnitz, Germany; Open Systems Lab, Indiana University, Bloomington, IN

  • Venue:
  • ISPA'06: Proceedings of the 2006 International Conference on Frontiers of High Performance Computing and Networking
  • Year:
  • 2006

Abstract

Non-blocking collective operations for MPI have been under discussion for a long time. We want to contribute to this discussion, give a rationale for the use of these operations, and assess their possible benefits. A LogGP model for the CPU overhead of collective algorithms and a benchmark to measure it are provided and show a large potential to overlap communication and computation. We show that non-blocking collective operations can provide at least the same benefits as non-blocking point-to-point operations already do. Our claim is that the actual CPU overhead of non-blocking collective operations depends on the message size and the communicator size, and that they especially benefit highly scalable applications with huge communicators. We prove that the share of this overhead in the overall communication time of current blocking collective operations shrinks with larger communicators and larger messages. We show that the user-level CPU overhead is less than 10% for MPICH2 and LAM/MPI using TCP/IP communication, which leads us to the conclusion that, by using non-blocking collective communication, ideally 90% of the CPU time now spent idling in blocking collectives can be freed for the application.
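
The overlap of communication and computation the abstract argues for can be illustrated with a short sketch. The example below is not taken from the paper, which predates the standardized interface; it uses the MPI-3 MPI_Iallreduce/MPI_Wait calls adopted later, and the array size and dummy computation are illustrative assumptions. The idea is simply that work independent of the collective's result proceeds while the collective is in flight.

```c
/* Minimal sketch (assumptions: MPI-3 library, illustrative buffer size). */
#include <mpi.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv)
{
    double local[N], global[N], other_work = 0.0;
    MPI_Request req;

    MPI_Init(&argc, &argv);

    for (int i = 0; i < N; i++)
        local[i] = (double)i;

    /* Start the collective; it can progress while the CPU does other work. */
    MPI_Iallreduce(local, global, N, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* Computation that does not depend on 'global' overlaps with the
     * communication; this is the otherwise idle CPU time the paper
     * argues can be reclaimed. */
    for (int i = 0; i < N; i++)
        other_work += local[i] * 0.5;

    /* Complete the collective before using its result. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    printf("global[0] = %f, other_work = %f\n", global[0], other_work);
    MPI_Finalize();
    return 0;
}
```

A blocking MPI_Allreduce at the same point would keep the CPU occupied (or waiting) for the full duration of the collective; the non-blocking form exposes that time to the application, which is the benefit the paper quantifies.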