Optimization of MPI collective communication on BlueGene/L systems

  • Authors:
  • George Almási;Philip Heidelberger;Charles J. Archer;Xavier Martorell;C. Chris Erway;José E. Moreira;B. Steinmacher-Burow;Yili Zheng

  • Affiliations:
  • IBM T.J. Watson Research Center, Yorktown Heights, NY;IBM T.J. Watson Research Center, Yorktown Heights, NY;IBM Systems and Technology Group, Rochester, MN;Universitat Politècnica de Catalunya, Barcelona, Spain;Brown University, Providence, RI;IBM Systems and Technology Group, Rochester, MN;IBM Germany, Boeblingen, Germany;Purdue University, West Lafayette, IN

  • Venue:
  • Proceedings of the 19th annual international conference on Supercomputing
  • Year:
  • 2005

Abstract

BlueGene/L is currently the world's fastest supercomputer. It consists of a large number of low-power dual-processor compute nodes interconnected by high-speed torus and collective networks. Because compute nodes do not have shared memory, MPI is the natural programming model for this machine. The BlueGene/L MPI library is a port of MPICH2.

In this paper we discuss the implementation of MPI collectives on BlueGene/L. The MPICH2 implementation of MPI collectives is based on point-to-point communication primitives. This turns out to be suboptimal for a number of reasons. Machine-optimized MPI collectives are necessary to harness the performance of BlueGene/L. We discuss these optimized MPI collectives, describing the algorithms and presenting performance results measured with targeted micro-benchmarks on real BlueGene/L hardware with up to 4096 compute nodes.
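The contrast the abstract draws is between collectives layered on point-to-point messaging (MPICH2's generic approach) and collectives mapped onto dedicated hardware such as BlueGene/L's collective network. The sketch below is illustrative only and is not the paper's implementation: it shows a generic binomial-tree broadcast built from MPI_Send/MPI_Recv, of the kind a point-to-point-based library uses, next to a call to MPI_Bcast, which a machine-specific MPI can implement directly on collective hardware.

```c
/* Illustrative sketch, not the paper's code: a binomial-tree broadcast
 * from rank 0 built only from point-to-point calls, versus the library
 * collective MPI_Bcast. */
#include <mpi.h>
#include <stdio.h>

static void p2p_bcast(void *buf, int count, MPI_Datatype type, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    /* Receive from our parent in the binomial tree (rank 0 has none). */
    int mask = 1;
    while (mask < size) {
        if (rank & mask) {
            MPI_Recv(buf, count, type, rank - mask, 0, comm, MPI_STATUS_IGNORE);
            break;
        }
        mask <<= 1;
    }

    /* Forward the data to our children. */
    mask >>= 1;
    while (mask > 0) {
        if (rank + mask < size)
            MPI_Send(buf, count, type, rank + mask, 0, comm);
        mask >>= 1;
    }
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = (rank == 0) ? 42 : 0;

    /* Software tree over point-to-point messages... */
    p2p_bcast(&value, 1, MPI_INT, MPI_COMM_WORLD);

    /* ...versus the collective, which an optimized MPI (as on BlueGene/L)
     * can drive over dedicated collective-network hardware. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("broadcast value = %d\n", value);

    MPI_Finalize();
    return 0;
}
```

The software tree needs O(log P) point-to-point message steps and pays the full per-message software overhead at every hop, which is part of why the paper argues that point-to-point-based collectives are suboptimal on this machine.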