Interprocedural dependence analysis and parallelization
SIGPLAN '86 Proceedings of the 1986 SIGPLAN symposium on Compiler construction
Direct parallelization of call statements
SIGPLAN '86 Proceedings of the 1986 SIGPLAN symposium on Compiler construction
Automatic translation of FORTRAN programs to vector form
ACM Transactions on Programming Languages and Systems (TOPLAS)
The art of computer programming, volume 2 (3rd ed.): seminumerical algorithms
Dependence graphs and compiler optimizations
POPL '81 Proceedings of the 8th ACM SIGPLAN-SIGACT symposium on Principles of programming languages
Projet Vesta: outil de calcul symbolique (Project Vesta: a symbolic-computation tool)
Proceedings of the 6th Colloquium on International Symposium on Programming
A hierarchical basis for reordering transformations
POPL '84 Proceedings of the 11th ACM SIGACT-SIGPLAN symposium on Principles of programming languages
A new polynomial-time algorithm for linear programming
STOC '84 Proceedings of the sixteenth annual ACM symposium on Theory of computing
Control and data dependence for program transformations
Speedup of ordinary programs
Dependence analysis for subscripted variables and its application to program transformations
Optimizing supercompilers for supercomputers
Experiences with data dependence abstractions
ICS '91 Proceedings of the 5th international conference on Supercomputing
Efficient and exact data dependence analysis
PLDI '91 Proceedings of the ACM SIGPLAN 1991 conference on Programming language design and implementation
The Omega test: a fast and practical integer programming algorithm for dependence analysis
Proceedings of the 1991 ACM/IEEE conference on Supercomputing
A practical algorithm for exact array dependence analysis
Communications of the ACM
Array abstractions using semantic analysis of trapezoid congruences
ICS '92 Proceedings of the 6th international conference on Supercomputing
A general algorithm for data dependence analysis
ICS '92 Proceedings of the 6th international conference on Supercomputing
Data dependence analysis on multi-dimensional array references
ICS '89 Proceedings of the 3rd international conference on Supercomputing
An efficient data dependence analysis for parallelizing compilers
IEEE Transactions on Parallel and Distributed Systems
The Power Test for data dependence
IEEE Transactions on Parallel and Distributed Systems
Accurate data dependence analysis is the key function in vectorizing and restructuring compilers for supercomputers. However, the data dependence analysis algorithms currently available have limitations. Those that execute quickly, such as the Banerjee Test [3], [4], [21], are very limited in generality; those that are general are too slow. Compiler designers have been keenly aware of the need for an algorithm that is both general and fast. The algorithm that follows fills this need simply and uniformly.

Aside from fast execution, this algorithm has three main features:

- It can deal with arbitrary linear constraints whose variables are not limited to loop-control variables.
- It can deal with any number of these linear constraints simultaneously.
- It looks only at integer solutions.

The algorithm organizes the multiple constraints into a constraint matrix and uses a method derived from linear programming techniques. We refer to this as the Constraint-Matrix algorithm. Burke and Cytron [6] have discussed linearization to deal with arrays of more than one dimension; this approach was also referred to by Towle [18]. Linearization is limited unless additional constraints are added: it cannot deal with arbitrary additional constraints, and it does not restrict its solutions to integers.

The Constraint-Matrix algorithm is organized for a goal that differs from that of the standard linear programming problem. This goal is to prove the lack of a solution (i.e., the lack of a dependence) rather than to minimize an "objective function." Further, this algorithm is designed to "short-circuit" when its goal is reached, in a way that a standard linear programming algorithm cannot.

To simplify the exposition, we begin with a precise definition of dependence. To provide some motivation for the later algorithm and to show the advantages of the matrix format, we then present two special-case methods (Solvable-Matrix and Matrix-GCD).
Finally we present the Constraint-Matrix algorithm.
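As background for the Matrix-GCD method mentioned above, the following is a minimal sketch of the classic single-subscript GCD dependence test; the function name and interface are illustrative, and this is not the paper's Constraint-Matrix algorithm itself:

```python
from math import gcd

def gcd_test(a, c, b, d):
    """Classic GCD dependence test (illustrative sketch, nonzero a and c
    assumed). A write to A[a*i + b] and a read of A[c*j + d] can only be
    dependent if a*i - c*j = d - b has an integer solution, which by
    elementary number theory requires gcd(a, c) to divide d - b."""
    return (d - b) % gcd(a, c) == 0  # True: dependence possible; False: independent

# A[2*i] written, A[2*j + 1] read: 2i - 2j = 1 has no integer solution,
# since gcd(2, 2) = 2 does not divide 1.
print(gcd_test(2, 2, 0, 1))
# A[3*i + 1] written, A[6*j + 4] read: gcd(3, 6) = 3 divides 4 - 1 = 3,
# so a dependence cannot be ruled out by this test.
print(gcd_test(3, 6, 1, 4))
```

Note that, like linearization, this test ignores loop bounds and any additional constraints; the abstract's point is that the Constraint-Matrix algorithm handles such constraints uniformly while still considering only integer solutions.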