Quantifying instruction-level parallelism limits on an EPIC architecture

  • Authors:
  • Hsien-Hsin Lee, Youfeng Wu, G. Tyson

  • Affiliations:
  • Dept. of Electr. Eng. & Comput. Sci., Michigan Univ., Ann Arbor, MI, USA

  • Venue:
  • ISPASS '00 Proceedings of the 2000 IEEE International Symposium on Performance Analysis of Systems and Software
  • Year:
  • 2000

Abstract

EPIC architectures rely heavily on state-of-the-art compiler technology to deliver optimal performance while keeping the hardware design simple. It is generally believed that an optimizing compiler has an enormous scheduling window for exploiting instruction-level parallelism (ILP), since the compiler orchestrates the entire program. In practice, however, state-of-the-art compilers confine most optimizations to loop boundaries (e.g. software pipelining, trace scheduling, and loop unrolling) and function boundaries (e.g. loop peeling, loop exchange, invariant hoisting, and global optimizations). Although techniques such as function inlining and interprocedural optimization can alleviate these constraints to a limited extent, loop and function boundaries remain the effective scope of the compiler's scheduler. Several previous ILP studies have explored the limits of parallelism on dynamic superscalar machines; however, those results are not applicable to EPIC architectures, because they rely on dynamic scheduling, not static code scheduling by the compiler, to reorder instructions. In this paper, we evaluate the limits of ILP obtainable through compiler scheduling alone. We quantify these limits as progressively more restrictive scheduling constraints are imposed: starting from interprocedural code scheduling, then intraprocedural, and finally loop-confined code scheduling.