Automatic translation of FORTRAN programs to vector form
ACM Transactions on Programming Languages and Systems (TOPLAS)
Communication efficient basic linear algebra computations on hypercube architectures
Journal of Parallel and Distributed Computing
Automatic decomposition of scientific programs for parallel execution
POPL '87 Proceedings of the 14th ACM SIGACT-SIGPLAN symposium on Principles of programming languages
Solving problems on concurrent processors. Vol. 1: General techniques and regular problems
Compiling C* programs for a hypercube multicomputer
PPEALS '88 Proceedings of the ACM/SIGPLAN conference on Parallel programming: experience with applications, languages and systems
An overview of the PTRAN analysis system for multiprocessing
Proceedings of the 1st International Conference on Supercomputing
Process decomposition through locality of reference
PLDI '89 Proceedings of the ACM SIGPLAN 1989 Conference on Programming language design and implementation
A methodology for parallelizing programs for multicomputers and complex memory multiprocessors
Proceedings of the 1989 ACM/IEEE conference on Supercomputing
Supporting shared data structures on distributed memory architectures
PPOPP '90 Proceedings of the second ACM SIGPLAN symposium on Principles & practice of parallel programming
A parallel language and its compilation to multiprocessor machines or VLSI
POPL '86 Proceedings of the 13th ACM SIGACT-SIGPLAN symposium on Principles of programming languages
Optimizing Supercompilers for Supercomputers
Optimal communication primitives and graph embeddings on hypercubes
Efficient Doacross execution on distributed shared-memory multiprocessors
Proceedings of the 1991 ACM/IEEE conference on Supercomputing
A static performance estimator to guide data partitioning decisions
PPOPP '91 Proceedings of the third ACM SIGPLAN symposium on Principles and practice of parallel programming
Compile-time generation of regular communications patterns
Proceedings of the 1991 ACM/IEEE conference on Supercomputing
A methodology for high-level synthesis of communication on multicomputers
ICS '92 Proceedings of the 6th international conference on Supercomputing
Global optimizations for parallelism and locality on scalable parallel machines
PLDI '93 Proceedings of the ACM SIGPLAN 1993 conference on Programming language design and implementation
PPOPP '93 Proceedings of the fourth ACM SIGPLAN symposium on Principles and practice of parallel programming
PARADIGM: a compiler for automatic data distribution on multicomputers
ICS '93 Proceedings of the 7th international conference on Supercomputing
Automatic data and computation decomposition on distributed memory parallel computers
ACM Transactions on Programming Languages and Systems (TOPLAS)
Distributed Memory Compiler Design For Sparse Problems
IEEE Transactions on Computers
IEEE Transactions on Parallel and Distributed Systems
A systematic approach to synthesize data alignment directives for distributed memory machines
Nordic Journal of Computing
Towards automatic translation of OpenMP to MPI
Proceedings of the 19th annual international conference on Supercomputing
The rise and fall of High Performance Fortran: an historical object lesson
Proceedings of the third ACM SIGPLAN conference on History of programming languages
An Approach To Data Distributions in Chapel
International Journal of High Performance Computing Applications
From FORTRAN 77 to locality-aware high productivity languages for peta-scale computing
Scientific Programming - Fortran Programming Language and Scientific Programming: 50 Years of Mutual Growth
This paper addresses the problem of data distribution and communication synthesis in generating parallel programs for massively parallel, distributed-memory machines. The source programs may be sequential, functional, or parallel programs written for a shared-memory model. Our approach analyzes source program references and matches syntactic reference patterns with aggregate communication routines that can be implemented efficiently on the target machine. An explicit communication metric guides optimizations that reduce communication overhead. The generated code with explicit communication is proven free of deadlock introduced by the compilation process.
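The pattern-matching idea in the abstract can be illustrated with a small sketch: given the index expressions of a left-hand-side and right-hand-side array reference on a distributed dimension, a compiler can classify the pair syntactically and pick an aggregate communication routine. This is a hypothetical illustration under simplified assumptions (affine one-dimensional subscripts, a fixed set of routine names), not the paper's actual algorithm.

```python
# Hypothetical sketch: classify a pair of affine index expressions
# coeff*i + offset into an aggregate communication pattern.
# The rules and routine names below are illustrative only.

def classify_reference(lhs, rhs):
    """Map a syntactic reference pattern to a communication routine name.

    lhs, rhs are (coeff, offset) pairs describing the subscript
    coeff*i + offset on the distributed dimension.
    """
    (a1, b1), (a2, b2) = lhs, rhs
    if (a1, b1) == (a2, b2):
        return "local"        # same element on both sides: no communication
    if a1 == a2:
        return "shift"        # constant offset: nearest-neighbor shift
    if a1 == -a2:
        return "reflect"      # index reversal: reflection pattern
    if a2 == 0:
        return "broadcast"    # rhs is a single fixed element: one-to-all
    return "gather"           # no special structure: general gather

# e.g. A(i) = B(i-1) is a shift; A(i) = B(n-i) is a reflection
print(classify_reference((1, 0), (1, -1)))  # prints "shift"
```

A real compiler would apply such classification per distributed dimension and per reference pair, then use a cost metric over the matched routines to compare candidate data distributions, in the spirit of the communication metric the abstract describes.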