Computer architecture evaluation requires new tools that complement customary simulations; in this sense, classical graph theory can help build a new framework for fine-grain parallelism analysis of execution performance, a step beyond the static analysis traditionally performed by compilers. Starting from the basic foundations of graph theory, this paper introduces the data dependence matrix D, supported by the novel concept of reduced valence. The matrix D characterizes a code sequence mathematically, is endowed with a number of properties and restrictions, and provides information about the code's ability to be processed concurrently. Among other contributions, some low-complexity techniques for computing the degree of parallelism from the matrix D are presented.
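As a rough illustration of the idea, the following sketch builds a data dependence matrix for a straight-line code sequence and derives an average parallelism degree from it. The instruction encoding as (read-set, write-set) pairs, the inclusion of flow, anti-, and output dependences, and the depth-based metric (instruction count divided by critical-path length) are illustrative assumptions, not the paper's exact formulation of D or of reduced valence.

```python
# Hypothetical sketch of a data dependence matrix D for a straight-line
# code sequence. Each instruction is a (reads, writes) pair of name sets.
# D[i][j] = 1 when instruction j (with i < j) depends on instruction i.

def dependence_matrix(instrs):
    n = len(instrs)
    D = [[0] * n for _ in range(n)]
    for j in range(n):
        rj, wj = instrs[j]
        for i in range(j):
            ri, wi = instrs[i]
            raw = wi & rj          # flow (true) dependence
            war = ri & wj          # anti-dependence
            waw = wi & wj          # output dependence
            if raw or war or waw:
                D[i][j] = 1
    return D

def parallelism_degree(D):
    """Average parallelism = n / critical-path length of the DAG in D."""
    n = len(D)
    depth = [1] * n                # depth of each node in the dependence DAG
    for j in range(n):
        for i in range(j):
            if D[i][j]:
                depth[j] = max(depth[j], depth[i] + 1)
    return n / max(depth) if n else 0.0

# Example: i1: a = x+y; i2: b = a*2; i3: c = x-y; i4: d = c*3
code = [({'x', 'y'}, {'a'}),
        ({'a'}, {'b'}),
        ({'x', 'y'}, {'c'}),
        ({'c'}, {'d'})]
D = dependence_matrix(code)
print(parallelism_degree(D))   # 4 instructions / critical path 2 -> 2.0
```

Here i2 depends on i1 and i4 on i3, so the two chains can run side by side: four instructions complete in two dependence levels, giving an average parallelism of 2. The strictly upper-triangular shape of D reflects the restriction that dependences only run forward in program order.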