Portable explicit threading and concurrent programming for MPI applications
PPAM'11 Proceedings of the 9th international conference on Parallel Processing and Applied Mathematics - Volume Part II
Concurrency and parallelism have long been viewed as important but somewhat distinct concepts. While concurrency is extensively used to amortize latency (for example, in web and database servers, user interfaces, etc.), parallelism is traditionally used to enhance performance through execution on multiple functional units. Motivated by an evolving application mix and trends in hardware architecture, there has been a push toward integrating traditional programming models for concurrency and parallelism. Using conventional threading APIs (POSIX, OpenMP) alongside messaging libraries (MPI), however, leads to significant programmability concerns, owing primarily to their disparate programming models. In this paper, we describe a novel API and associated runtime for concurrent programming, called MPI Threads (MPIT), which provides a portable and reliable abstraction of low-level threading facilities. We describe various design decisions in MPIT, their underlying motivation, and associated semantics. We provide performance measurements for our prototype implementation to quantify the overheads associated with various operations. Finally, we discuss two real-world use cases: an asynchronous message queue and a parallel information retrieval system. We demonstrate that MPIT provides a versatile, low-overhead programming model that can be leveraged to program large parallel ensembles.