Data-parallel languages such as High Performance Fortran (HPF) present a simple execution model in which a single thread of control performs high-level operations on distributed arrays. These languages can greatly ease the development of parallel programs. Yet there are large classes of applications for which a mixture of task and data parallelism is most appropriate. Such applications can be structured as collections of data-parallel tasks that communicate by using explicit message passing. Because the Message Passing Interface (MPI) defines standardized, familiar mechanisms for this communication model, we propose that HPF tasks communicate by making calls to a coordination library that provides an HPF binding for MPI. The semantics of a communication interface designed for sequential languages can be ambiguous when the interface is invoked from a parallel language; we show how these ambiguities can be resolved by describing one possible HPF binding for MPI. We then present the design of a library that implements this binding, discuss the issues that influenced our design decisions, and evaluate the performance of a prototype HPF/MPI library using a communications microbenchmark and an application kernel. Finally, we discuss how other MPI features might be incorporated into our design framework.
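The communication model described in the abstract can be pictured with a short, purely illustrative sketch: each task is itself a data-parallel HPF program that owns a distributed array and exchanges an array section with a peer task through MPI-style calls. The distribution directive, array shapes, and task roles below are hypothetical and are not taken from the paper; only the MPI routine names (MPI_INIT, MPI_COMM_RANK, MPI_SEND, MPI_RECV, MPI_FINALIZE) follow the MPI standard.

```fortran
! Illustrative sketch only (hypothetical program, not the paper's code).
! Each MPI "process" here is an entire data-parallel HPF task that may
! itself run on many physical processors; the HPF/MPI coordination
! library must gather/scatter the communicated array section from the
! processors that own its pieces.
PROGRAM task_parallel_sketch
  INCLUDE 'mpif.h'
  INTEGER :: rank, ierr
  INTEGER :: status(MPI_STATUS_SIZE)
  REAL :: field(1024, 1024)
!HPF$ DISTRIBUTE field(BLOCK, *)   ! array is distributed within the task

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  IF (rank == 0) THEN
     ! Send the last column of the distributed array to the peer task.
     CALL MPI_SEND(field(:, 1024), 1024, MPI_REAL, 1, 0, &
                   MPI_COMM_WORLD, ierr)
  ELSE IF (rank == 1) THEN
     ! Receive it into the first column of this task's array.
     CALL MPI_RECV(field(:, 1), 1024, MPI_REAL, 0, 0, &
                   MPI_COMM_WORLD, status, ierr)
  END IF

  CALL MPI_FINALIZE(ierr)
END PROGRAM task_parallel_sketch
```

The sketch also makes the binding ambiguity concrete: because the "sender" is a parallel task rather than a single process, the library must decide which physical processors participate in the transfer and how the section `field(:, 1024)` is assembled, which is exactly the class of semantic question the proposed HPF binding for MPI resolves.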