The Message Passing Interface (MPI) has been extremely successful as a portable way to program high-performance parallel computers. This success has come despite the widely held view that message passing is difficult and that other approaches, including automatic parallelization and directive-based parallelism, are easier to use. This paper argues that MPI has succeeded because it addresses all of the important issues in providing a parallel programming model.