Changing Interaction of Compiler and Architecture

  • Authors:
  • Sarita V. Adve; Doug Burger; Rudolf Eigenmann; Alasdair Rawsthorne; Michael D. Smith; Catherine H. Gebotys; Mahmut T. Kandemir; David J. Lilja; Alok N. Choudhary; Jesse Z. Fang; Pen-Chung Yew

  • Venue:
  • Computer
  • Year:
  • 1997


Abstract

With recent developments in compilation technology and architectural design, the line between traditional hardware and software roles has become increasingly blurred. The compiler can now see the processor's inner structure, which lets architects exploit sophisticated program analysis techniques to hide branch and memory access delays, for example. Processors can now implement register renaming and dynamic instruction-scheduling algorithms directly in the hardware, tasks that were once exclusively the compiler's job. A similar shift is occurring in optimizing compilers for parallel machines. To parallelize a larger class of applications, compiler writers are moving beyond static transformations and exploring techniques that rely on runtime decisions or hardware support. This increased blurring of compile-time and runtime optimizations opens many new research opportunities, particularly for program optimization, a task typically performed entirely at compile time. This article describes an optimization continuum and shows how different classes of optimizations fall within it.
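The "runtime decisions" the abstract alludes to can be illustrated with an inspector/executor-style test, a technique from the parallelizing-compiler literature: the compiler emits a cheap runtime check over the loop's subscript values, and the schedule (parallel or serial) is chosen only once those values are known. The sketch below is illustrative, not drawn from the article; all names and the simplified conflict test are assumptions.

```python
def can_parallelize(indices):
    """Inspector: a loop of the form a[indices[i]] += f(i) has no
    cross-iteration conflicts exactly when no subscript repeats,
    so every iteration writes a distinct element."""
    return len(set(indices)) == len(indices)


def run_loop(a, indices, f):
    """Executor: pick a schedule at runtime, based on data that is
    unavailable at compile time. Returns the updated array and the
    schedule that was chosen."""
    parallel = can_parallelize(indices)
    # In a real system the parallel path would dispatch independent
    # iterations to multiple processors; here both paths perform the
    # same updates and we only record which schedule was selected.
    for i, idx in enumerate(indices):
        a[idx] += f(i)
    return a, parallel
```

For example, `run_loop([0, 0, 0], [0, 1, 2], lambda i: i)` finds all subscripts distinct and chooses the parallel schedule, while `run_loop([0, 0], [0, 0], lambda i: 1)` detects the repeated subscript and falls back to serial order. A static compiler, seeing only the symbolic subscript array, would have to assume the worst case in both instances.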