Efficiently computing static single assignment form and the control dependence graph. ACM Transactions on Programming Languages and Systems (TOPLAS).
SUIF: an infrastructure for research on parallelizing and optimizing compilers. ACM SIGPLAN Notices.
Combining analyses, combining optimizations. ACM Transactions on Programming Languages and Systems (TOPLAS).
Efficient program analysis using dependence flow graphs.
Proceedings of the 22nd Annual International Symposium on Computer Architecture (ISCA '95).
High Performance Compilers for Parallel Computing.
High-Level Information - An Approach for Integrating Front-End and Back-End Compilers. Proceedings of the 1998 International Conference on Parallel Processing (ICPP '98).
Designing the McCAT Compiler Based on a Family of Structured Intermediate Representations. Proceedings of the 5th International Workshop on Languages and Compilers for Parallel Computing.
Polaris: Improving the Effectiveness of Parallelizing Compilers. Proceedings of the 7th International Workshop on Languages and Compilers for Parallel Computing (LCPC '94).
Superthreading: integrating compilation technology and processor architecture for cost-effective concurrent multithreading. Proceedings of the 1996 Conference on Parallel Architectures and Compilation Techniques (PACT '96).
Boosting the performance of multimedia applications using SIMD instructions. Proceedings of the 14th International Conference on Compiler Construction (CC '05).
A thread partitioning approach for speculative multithreading. The Journal of Supercomputing.
In this paper, we present the overall design of the Agassiz compiler [1]. The Agassiz compiler is an integrated compiler targeting concurrent multithreaded architectures [12,13]. These architectures can exploit both loop-level and instruction-level parallelism in general-purpose applications (such as those in the SPEC benchmarks). They also support various kinds of control and data speculation, runtime data dependence checking, and fast synchronization and communication mechanisms. To support such architectures, the Agassiz compiler uses a loop-level parallelizing compiler as its front-end and an instruction-level optimizing compiler as its back-end. In this paper, we focus on the intermediate representation (IR) design of the Agassiz compiler and describe how it supports the front-end analyses, various optimization techniques, and source-to-source translation.