We describe a more compact representation of MPI process groups based on strided, partial sequences that supports all group and communicator creation operations in time proportional to the size of the argument groups. The worst-case lookup time (to determine the global process id corresponding to a local process rank) is logarithmic, often constant, and can be traded against the maximum possible compaction. Many commonly used MPI process groups can be represented in constant space with constant lookup time, for instance the process group of MPI_COMM_WORLD, all consecutive subgroups of this group, and many others. The representation never uses more than one word per process, and often much less, and is in this sense strictly better than the trivial, commonly used representation by a simple mapping array. The data structure and operations have been fully implemented, and experiments show substantial space savings for classes of process groups believed to be typical of MPI applications.
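To illustrate the idea, here is a minimal sketch (not the paper's actual implementation) of a group represented as a list of strided, partial sequences. Each segment `(start, stride, count)` is an assumption about how such a sequence might be encoded; rank-to-global-id translation does a binary search over segment boundaries, giving logarithmic worst-case lookup and constant-time lookup when the group is a single strided sequence, such as the group of MPI_COMM_WORLD or any consecutive subgroup of it.

```python
import bisect


class StridedGroup:
    """Hypothetical strided-sequence representation of a process group.

    segments: list of (start, stride, count) triples; segment i maps
    `count` consecutive local ranks to the global ids
    start, start + stride, start + 2*stride, ...
    """

    def __init__(self, segments):
        self.segments = segments
        # prefix[i] = first local rank covered by segment i
        self.prefix = []
        total = 0
        for (_start, _stride, count) in segments:
            self.prefix.append(total)
            total += count
        self.size = total

    def translate(self, rank):
        """Map a local rank to its global process id.

        Binary search over segment boundaries: O(log k) for k segments,
        O(1) when k == 1 (e.g. MPI_COMM_WORLD and consecutive subgroups,
        which are a single segment with stride 1).
        """
        if not 0 <= rank < self.size:
            raise IndexError(rank)
        i = bisect.bisect_right(self.prefix, rank) - 1
        start, stride, _count = self.segments[i]
        return start + (rank - self.prefix[i]) * stride


# MPI_COMM_WORLD with 8 processes: one segment, constant space and lookup.
world = StridedGroup([(0, 1, 8)])

# A group listing the even-ranked processes followed by the odd-ranked
# ones: two segments instead of a full 8-entry mapping array.
even_then_odd = StridedGroup([(0, 2, 4), (1, 2, 4)])
```

The space cost is one triple per segment rather than one word per process, which is where the compaction comes from; a plain mapping array is the degenerate case of one single-rank segment per process.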