Compilers: principles, techniques, and tools
Interprocedural dependence analysis and parallelization
SIGPLAN '86 Proceedings of the 1986 SIGPLAN symposium on Compiler construction
Toward computer-aided programming for multiprocessors
Proceedings of the international workshop on Parallel algorithms & architectures
Interprocedural analysis based program restructuring
Proceedings of the international workshop on Parallel algorithms & architectures
Strategies for cache and local memory management by global program transformation
Journal of Parallel and Distributed Computing - Special Issue on Languages, Compilers and Environments for Parallel Programming
On the SUP-INF Method for Proving Presburger Formulas
Journal of the ACM (JACM)
Deciding Linear Inequalities by Computing Loop Residues
Journal of the ACM (JACM)
Algebraic simplification: a guide for the perplexed
Communications of the ACM
Computers and Intractability: A Guide to the Theory of NP-Completeness
Speedup of ordinary programs
Dependence analysis for subscripted variables and its application to program transformations
PLDI '91 Proceedings of the ACM SIGPLAN 1991 conference on Programming language design and implementation
The Omega test: a fast and practical integer programming algorithm for dependence analysis
Proceedings of the 1991 ACM/IEEE conference on Supercomputing
A practical algorithm for exact array dependence analysis
Communications of the ACM
A general algorithm for data dependence analysis
ICS '92 Proceedings of the 6th international conference on Supercomputing
Array privatization for parallel execution of loops
ICS '92 Proceedings of the 6th international conference on Supercomputing
Cache coherence in large-scale shared-memory multiprocessors: issues and comparisons
ACM Computing Surveys (CSUR)
Lazy array data-flow dependence analysis
POPL '94 Proceedings of the 21st ACM SIGPLAN-SIGACT symposium on Principles of programming languages
Symbolic analysis for parallelizing compilers
ACM Transactions on Programming Languages and Systems (TOPLAS)
A unified semantic approach for the vectorization and parallelization of generalized reductions
ICS '89 Proceedings of the 3rd international conference on Supercomputing
Data dependence analysis on multi-dimensional array references
ICS '89 Proceedings of the 3rd international conference on Supercomputing
An Efficient Data Dependence Analysis for Parallelizing Compilers
IEEE Transactions on Parallel and Distributed Systems
An Empirical Study of Fortran Programs for Parallelizing Compilers
IEEE Transactions on Parallel and Distributed Systems
The purpose of a vectorizer is to restructure programs so as to expose the most efficiently exploitable forms of vector loops. This restructuring is guided by a suitable form of semantic analysis, dependence testing, which must be precise if the architecture is to be fully exploited. Most studies reduce this phase to the application of a series of explicitly computable arithmetic criteria. In many cases this approach fails because the criteria cannot be evaluated numerically: they contain symbols whose values are unknown, which also makes it difficult to use other information pertaining to those symbols. It is shown here that symbolic equations can be extracted from some of the classical criteria, merged with other symbolic knowledge about the program, and that the resulting global system can be used to decide the non-existence of a dependence. In the context of the VATIL vectorizer [17, 16], it is also shown that the computation cost can be controlled, yielding good overall efficiency.
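One of the classical numeric criteria alluded to above is the GCD test: for array references A[a1*i + b1] (write) and A[a2*j + b2] (read), a dependence equation a1*i - a2*j = b2 - b1 can have an integer solution only if gcd(a1, a2) divides b2 - b1. A minimal Python sketch (illustrative only, not the VATIL implementation) shows why such a criterion requires the coefficients to be known numerically, which is precisely the limitation the symbolic approach addresses:

```python
from math import gcd

def gcd_test(a1, b1, a2, b2):
    """Classical GCD dependence test for references A[a1*i + b1]
    and A[a2*j + b2]: a dependence a1*i - a2*j = b2 - b1 can only
    exist if gcd(a1, a2) divides b2 - b1.

    Returns False when dependence is disproved, True when it
    cannot be ruled out by this test alone.
    """
    g = gcd(a1, a2)
    if g == 0:
        # Both coefficients are zero: the subscripts are constants,
        # so a dependence exists exactly when they coincide.
        return b1 == b2
    return (b2 - b1) % g == 0

# A[2*i] vs A[2*j + 1]: gcd(2, 2) = 2 does not divide 1 -> independent.
print(gcd_test(2, 0, 2, 1))  # False: no dependence
# A[2*i] vs A[4*j + 2]: gcd(2, 4) = 2 divides 2 -> possible dependence.
print(gcd_test(2, 0, 4, 2))  # True: dependence not ruled out
```

When a coefficient such as a1 is a program symbol rather than a literal integer, this purely numeric test cannot be applied; the paper's contribution is to extract the corresponding symbolic equation instead and decide it against other symbolic knowledge about the program.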