Improving the execution time of global communication operations

  • Authors:
  • Matthias Kühnemann; Thomas Rauber; Gudula Rünger

  • Affiliations:
  • TU Chemnitz, Chemnitz, Germany; Universität Bayreuth, Bayreuth, Germany; TU Chemnitz, Chemnitz, Germany

  • Venue:
  • Proceedings of the 1st conference on Computing frontiers
  • Year:
  • 2004


Abstract

Many parallel applications from scientific computing use MPI global communication operations to collect or distribute data. Since the execution times of these communication operations increase with the number of participating processors, scalability problems can occur. In this article, we show for different MPI implementations how the execution time of global communication operations can be significantly improved by a restructuring based on orthogonal processor structures. As platforms, we consider a dual Xeon cluster, a Beowulf cluster, and a Cray T3E with different MPI implementations. We show that the execution time of operations like MPI_Bcast() or MPI_Allgather() can be reduced by 40% and 70% on the dual Xeon cluster and the Beowulf cluster, respectively. A significant improvement can also be obtained on the Cray T3E by a careful selection of the processor groups. We demonstrate that the optimized communication operations can be used to reduce the execution time of data parallel implementations of complex application programs without any other reordering of the computation and communication structure.
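To illustrate the general idea of restructuring a global operation over an orthogonal processor layout, the following is a minimal sketch (not the authors' implementation): the processes are arranged in a 2D grid, row and column communicators are created with MPI_Comm_split(), and a broadcast from global rank 0 is performed in two phases, first along the root's column and then concurrently along all rows. The grid width NCOLS is an illustrative parameter; the paper selects group sizes per platform and MPI library.

    /* Hedged sketch of a two-phase broadcast over an orthogonal 2D layout.
       Assumes the number of processes is divisible by NCOLS. */
    #include <mpi.h>
    #include <stdio.h>

    /* Broadcast from global rank 0 using column and row communicators. */
    static void orthogonal_bcast(void *buf, int count, MPI_Datatype type,
                                 MPI_Comm row_comm, MPI_Comm col_comm,
                                 int my_col)
    {
        /* Phase 1: broadcast along the root's column (column 0). */
        if (my_col == 0)
            MPI_Bcast(buf, count, type, 0, col_comm);
        /* Phase 2: each row leader (column 0) broadcasts along its row. */
        MPI_Bcast(buf, count, type, 0, row_comm);
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int NCOLS = 4;          /* illustrative grid width */
        int my_row = rank / NCOLS;
        int my_col = rank % NCOLS;

        /* Split the global communicator into row and column groups. */
        MPI_Comm row_comm, col_comm;
        MPI_Comm_split(MPI_COMM_WORLD, my_row, my_col, &row_comm);
        MPI_Comm_split(MPI_COMM_WORLD, my_col, my_row, &col_comm);

        int data = (rank == 0) ? 42 : 0;
        orthogonal_bcast(&data, 1, MPI_INT, row_comm, col_comm, my_col);
        printf("rank %d received %d\n", rank, data);

        MPI_Comm_free(&row_comm);
        MPI_Comm_free(&col_comm);
        MPI_Finalize();
        return 0;
    }

Replacing one broadcast over all p processes with a broadcast over p/NCOLS processes followed by concurrent broadcasts over NCOLS processes is the kind of restructuring whose benefit the paper measures for each platform and MPI implementation.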