In distributed-memory message-passing architectures, reducing communication cost is extremely important. In this paper, we present a technique to optimize communication globally. Our approach is based on a combination of a linear algebra framework and dataflow analysis, and can take arbitrary control flow into account. The distinctive features of the algorithm are its accuracy in keeping communication set information and its support for general alignments and distributions, including block-cyclic distributions. The method is currently being implemented in the PARADIGM compiler. Preliminary results show that the technique is effective in reducing both the number and the volume of communication.
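The abstract does not reproduce the algorithm, but the general idea of a global dataflow analysis over communication sets can be sketched. The following toy example is an illustrative assumption, not the authors' formulation: it runs a forward, available-expressions-style analysis over a small control-flow graph, where a send of an array section is redundant if the same section has been communicated along every path reaching it and the underlying data has not been redefined. The block names, array sections, and meet/transfer rules are all hypothetical.

```python
# Hypothetical sketch of global communication optimization via dataflow
# analysis (NOT the paper's algorithm): a SEND of an array section is
# redundant if that section is "available" (already communicated, data
# not since redefined) along every path reaching the block.
from functools import reduce

# Each block: (sections sent here, sections whose data is redefined here)
blocks = {
    "entry": (["A[1:100]", "B[1:100]"], set()),
    "left":  ([], set()),
    "right": ([], {"B[1:100]"}),            # B rewritten: kills availability
    "join":  (["A[1:100]", "B[1:100]"], set()),
}
preds = {"entry": [], "left": ["entry"], "right": ["entry"],
         "join": ["left", "right"]}
order = ["entry", "left", "right", "join"]

ALL = {s for sends, _ in blocks.values() for s in sends}

def transfer(block, avail):
    # Kill redefined sections, then add the sections this block sends.
    sends, kills = blocks[block]
    return (avail - kills) | set(sends)

def avail_in(b, out):
    # Meet over predecessors: intersection (must hold on ALL paths).
    ins = [out[p] for p in preds[b]]
    return reduce(set.__and__, ins) if ins else set()

# Iterate to a fixed point, starting from the optimistic solution.
out = {b: set(ALL) for b in order}
changed = True
while changed:
    changed = False
    for b in order:
        new = transfer(b, avail_in(b, out))
        if new != out[b]:
            out[b] = new
            changed = True

# Classify each send: redundant if already available on entry to its block.
verdict = {}
for b in order:
    avail = avail_in(b, out)
    for s in blocks[b][0]:
        verdict[(b, s)] = "redundant" if s in avail else "needed"
        print(f"{b}: send {s} is {verdict[(b, s)]}")
```

Here the send of `A[1:100]` at `join` is classified as redundant (it was communicated on both incoming paths), while `B[1:100]` must be re-sent because the `right` branch redefined it; eliminating such sends reduces both the number and the volume of messages, which is the effect the paper measures.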