The message-passing paradigm is now widely accepted as the main inter-process communication mechanism in distributed-memory parallel systems. One of its disadvantages, however, is the high cost of data exchange. In this paper, we describe a message-passing optimization technique that exploits single-assignment and constant-information properties to reduce the number of communications. Like the more general partial-evaluation approach, the technique evaluates local and remote memory operations when only part of the input is known or available, and it further specializes the program with respect to its input data. The technique applies to programs that use a distributed single-assignment memory system. Experimental results show considerable speedups for programs running on systems with slow interconnection networks. We also show that single-assignment memory systems tolerate network latency well and that the overhead introduced by their management can be hidden.
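To illustrate one way the single-assignment property can reduce communication, consider the following minimal sketch (not the paper's actual implementation; all names and the `remote_fetch` callback are hypothetical). Because a single-assignment cell is written at most once, a value fetched from a remote node can be cached locally forever with no invalidation protocol, so repeated reads cost a single message:

```python
# Hypothetical sketch: a toy distributed single-assignment memory
# with a local software cache. The network is simulated by a
# user-supplied remote_fetch callback.

class SingleAssignmentStore:
    """Caches remote reads of single-assignment cells."""

    def __init__(self, remote_fetch):
        self._remote_fetch = remote_fetch  # stands in for a network round trip
        self._cache = {}
        self.remote_reads = 0              # counts simulated messages sent

    def read(self, addr):
        # Single-assignment property: a cached value can never become
        # stale, so no coherence or invalidation traffic is needed.
        if addr not in self._cache:
            self.remote_reads += 1
            self._cache[addr] = self._remote_fetch(addr)
        return self._cache[addr]


# Usage: three reads of the same cell cost one message instead of three.
store = SingleAssignmentStore(remote_fetch=lambda addr: addr * 2)
values = [store.read(7) for _ in range(3)]
print(values, store.remote_reads)
```

When the value of a cell is additionally known to be constant at specialization time, the read can be resolved entirely at compile time, eliminating even the first message, which is the partial-evaluation aspect the abstract describes.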