Efficient and correct execution of parallel programs that share memory
ACM Transactions on Programming Languages and Systems (TOPLAS)
Analyses and optimizations for shared address space programs
Journal of Parallel and Distributed Computing - Special issue on compilation techniques for distributed memory systems
Co-array Fortran for parallel programming
ACM SIGPLAN Fortran Forum
Type systems for distributed data structures
Proceedings of the 27th ACM SIGPLAN-SIGACT symposium on Principles of programming languages
The implementation of MPI-2 one-sided communication for the NEC SX-5
Proceedings of the 2000 ACM/IEEE conference on Supercomputing
Single sided MPI implementations for SUN MPI
Proceedings of the 2000 ACM/IEEE conference on Supercomputing
Learning from the Success of MPI
HiPC '01 Proceedings of the 8th International Conference on High Performance Computing
Proceedings of the 11 IPPS/SPDP'99 Workshops Held in Conjunction with the 13th International Parallel Processing Symposium and 10th Symposium on Parallel and Distributed Processing
A performance analysis of the Berkeley UPC compiler
ICS '03 Proceedings of the 17th annual international conference on Supercomputing
An Evaluation of Current High-Performance Networks
IPDPS '03 Proceedings of the 17th International Symposium on Parallel and Distributed Processing
A New DMA Registration Strategy for Pinning-Based High Performance Networks
IPDPS '03 Proceedings of the 17th International Symposium on Parallel and Distributed Processing
Titanium Language Reference Manual
GASNet Specification, v1.1
MPI: A Message-Passing Interface Standard
Type systems for distributed data sharing
SAS'03 Proceedings of the 10th international conference on Static analysis
High Performance Remote Memory Access Communication: The ARMCI Approach
International Journal of High Performance Computing Applications
Unifying UPC and MPI runtimes: experience with MVAPICH
Proceedings of the Fourth Conference on Partitioned Global Address Space Programming Model
Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis
Poster: High-level, one-sided programming models on MPI: a case study with global arrays and NWChem
Proceedings of the 2011 companion on High Performance Computing Networking, Storage and Analysis Companion
Global Futures: A Multithreaded Execution Model for Global Arrays-based Applications
CCGRID '12 Proceedings of the 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2012)
Analysis of implementation options for MPI-2 one-sided
PVM/MPI'07 Proceedings of the 14th European conference on Recent Advances in Parallel Virtual Machine and Message Passing Interface
MPI 3 and beyond: why MPI is successful and what challenges it faces
EuroMPI'12 Proceedings of the 19th European conference on Recent Advances in the Message Passing Interface
A remote memory access infrastructure for global address space programming models in FPGAs
Proceedings of the ACM/SIGDA international symposium on Field programmable gate arrays
Using MPI in high-performance computing services
Proceedings of the 20th European MPI Users' Group Meeting
Portable, MPI-interoperable Coarray Fortran
Proceedings of the 19th ACM SIGPLAN symposium on Principles and practice of parallel programming
MPI support is nearly ubiquitous on high-performance systems today and is generally highly tuned for performance. It would thus seem to offer a convenient "portable network assembly language" to developers of parallel programming languages who wish to target different network architectures. Unfortunately, neither the traditional MPI 1.1 API nor the newer MPI 2.0 extensions for one-sided communication provide an adequate compilation target for global address space languages, and the same is likely true for many other parallel languages. Simulating one-sided communication on top of the two-sided MPI 1.1 API is too expensive, while the MPI 2.0 one-sided API imposes significant restrictions on memory access patterns that would have to be exposed at the language level, since a compiler cannot effectively hide them given current conflict- and alias-detection algorithms.