Memory expansion is a classical means of extracting parallelism from imperative programs. However, current techniques require a runtime mechanism to restore the data flow whenever the expansion maps two definitions reaching the same use to two different memory locations (e.g., the φ functions of the SSA framework). This paper presents an expansion framework for any type of data structure in any imperative program, without the need for dynamic data-flow restoration. The key idea is to group together the definitions that reach a common use. We show that such an expansion boils down to mapping each group to a single memory cell.