Compilers: principles, techniques, and tools
Interprocedural dependence analysis and parallelization
SIGPLAN '86 Proceedings of the 1986 SIGPLAN symposium on Compiler construction
Communications of the ACM
Graphical development tools for network-based concurrent supercomputing
Proceedings of the 1991 ACM/IEEE conference on Supercomputing
The CODE 2.0 graphical parallel programming language
ICS '92 Proceedings of the 6th international conference on Supercomputing
Fortran M: a language for modular parallel programming
Journal of Parallel and Distributed Computing
Optimal mapping of sequences of data parallel tasks
PPOPP '95 Proceedings of the fifth ACM SIGPLAN symposium on Principles and practice of parallel programming
Optimizing Supercompilers for Supercomputers
High Performance Compilers for Parallel Computing
Communication and memory requirements as the basis for mapping task and data parallel programs
Proceedings of the 1994 ACM/IEEE conference on Supercomputing
Task Parallelism in a High Performance Fortran Framework
IEEE Parallel & Distributed Technology: Systems & Technology
Task Parallelism and High-Performance Languages
IEEE Parallel & Distributed Technology: Systems & Technology
A Notation for Deterministic Cooperating Processes
IEEE Transactions on Parallel and Distributed Systems
Task Parallel Programming in Fx
Dependence analysis for subscripted variables and its application to program transformations
In a task parallel language such as Fortran M, the programmer writes a task parallel program using explicit parallel constructs. When data dependences exist between called procedures, as they do in many applications, it is difficult for the programmer to write a task parallel program that respects those dependences. It is therefore desirable for the compiler to detect implicit parallelism and transform the program into a parallelized form, but current task parallel language compilers do not provide this capability. In this paper, we classify the cases that arise according to their dependence relations. We then propose parallelization methods by which the compiler detects the implicit parallelism in each case and transforms the program into task parallel constructs such as the PROCESSES block or the PROCESSDO loop of Fortran M. Finally, we compare our methods with conventional parallelization and describe their benefits.
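As a sketch of the kind of transformation described above, the fragment below shows a sequential loop of procedure calls with no cross-iteration dependences being rewritten into a Fortran M PROCESSDO loop, and two independent procedure calls being rewritten into a PROCESSES block. The procedure names (work, producer, consumer) and array names are hypothetical; the construct keywords follow the Fortran M language of Foster and Chandy.

```fortran
C Sequential form: the compiler's dependence analysis finds no
C dependence between iterations, so each call may run concurrently.
      do i = 1, n
         call work(a(i))
      enddo

C Transformed form: each iteration becomes a separate process.
      processdo i = 1, n
         processcall work(a(i))
      endprocessdo

C Two independent calls on disjoint data become a PROCESSES block,
C in which all process calls execute concurrently.
      processes
         processcall producer(b)
         processcall consumer(c)
      endprocesses
```

When a dependence does exist between the calls (for example, producer writes data that consumer reads), this transformation is not legal as written; the classification in the paper determines, per dependence relation, whether and how such calls can still be placed in parallel constructs.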