Implementing and Benchmarking Derived Datatypes in Metacomputing

  • Authors:
  • Edgar Gabriel, Michael Resch, Roland Rühle


  • Venue:
  • HPCN Europe 2001 Proceedings of the 9th International Conference on High-Performance Computing and Networking
  • Year:
  • 2001

Abstract

Flexible data structures have become a common programming tool in engineering and scientific simulation in recent years. Standard programming languages such as Fortran, C, and C++ allow user-defined datatypes to be specified for such structures. For parallel programming this poses a particular problem when data must be exchanged between processors. Regular data structures occupy contiguous space in memory and can therefore be transferred easily to other processes when necessary. Irregular data structures, however, are more difficult to handle, and the cost of communicating them may be rather high. MPI (the Message Passing Interface) provides so-called "derived datatypes" to overcome this problem, and on many systems these derived datatypes have been implemented efficiently. However, when MPI runs on a cluster of systems connected by a wide-area network, such optimized implementations are not yet available, and the communication overhead may be substantial. The purpose of this paper is to show how this problem can be overcome by considering both the nature of the derived datatype and the cluster of systems used. We present an optimized implementation and show results for clusters of supercomputers.