Metacomputing experience in a transatlantic wide area application test-bed
Future Generation Computer Systems - Special issue on metacomputing
Flexible data structures have become a common programming tool in engineering and scientific simulation in recent years. Standard programming languages such as Fortran, C, and C++ allow user-defined datatypes to be specified for such structures. For parallel programming, this raises a particular problem when data must be exchanged between processors. Regular data structures occupy contiguous memory and can therefore be transferred to other processes easily when necessary. Irregular data structures, however, are more difficult to handle, and the cost of communicating them can be high. MPI (Message Passing Interface) provides so-called "derived datatypes" to address this problem, and on many systems these derived datatypes have been implemented efficiently. However, when MPI runs on a cluster of systems connected by a wide-area network, such optimized implementations are not yet available, and the communication overhead can be substantial. The purpose of this paper is to show how this problem can be overcome by taking into account both the structure of the derived datatype and the cluster of systems used. We present an optimized implementation and show results for clusters of supercomputers.