A scalable front-end architecture for fast instruction delivery

  • Authors: Glenn Reinman; Todd Austin; Brad Calder
  • Affiliations: Department of Computer Science and Engineering, University of California, San Diego; Microcomputer Research Labs, Intel Corporation; Department of Computer Science and Engineering, University of California, San Diego
  • Venue: ISCA '99 Proceedings of the 26th annual international symposium on Computer architecture
  • Year: 1999

Abstract

In the pursuit of instruction-level parallelism, significant demands are placed on a processor's instruction delivery mechanism. Delivering the performance necessary to meet future processor execution targets requires that the performance of the instruction delivery mechanism scale with the execution core. Attaining these targets is a challenging task due to I-cache misses, branch mispredictions, and taken branches in the instruction stream. To further complicate matters, a VLSI interconnect scaling trend is materializing that further limits the performance of front-end designs in future-generation process technologies. To counter these challenges, we present a fetch architecture that permits a faster cycle time than previous designs and scales better with future process technologies. Our design, called the Fetch Target Buffer (FTB), is a multi-level fetch block-oriented predictor. We decouple the FTB from the instruction fetch and decode pipelines to afford it the fastest clock possible. Through cycle-based simulation and circuit-level delay analysis, we find that our multi-level FTB design is capable of delivering instructions 25% faster than the best single-level BTB-based pipeline configuration. Moreover, we show that our design scales better to future process technologies than traditional single-level designs.
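
The sketch below is a minimal, illustrative C model of the decoupling idea described in the abstract: a fetch-target predictor produces fetch blocks into a queue that the instruction fetch stage drains independently, so each side can advance on its own clock. It is not the paper's implementation; the structure names (ftb_entry, fetch_block, ftq), the queue depth, the direct-mapped single-level lookup, and the miss fall-through policy are all assumptions made for the example.

  /* Illustrative decoupled front-end: predictor fills a fetch target queue,
   * fetch drains it. All sizes and policies here are assumptions. */
  #include <stdio.h>
  #include <stdint.h>
  #include <stdbool.h>

  #define FTQ_SIZE 8           /* fetch target queue depth (assumed) */
  #define FTB_SETS 64          /* small direct-mapped FTB for illustration */

  typedef struct {
      uint32_t start;          /* fetch block start address */
      uint32_t length;         /* instructions up to the ending branch */
      uint32_t target;         /* predicted taken target */
      bool     taken;          /* predicted branch direction */
  } fetch_block;

  typedef struct {
      uint32_t    tag;
      fetch_block blk;
      bool        valid;
  } ftb_entry;

  static ftb_entry   ftb[FTB_SETS];     /* single-level FTB for the sketch */
  static fetch_block ftq[FTQ_SIZE];     /* fetch target queue */
  static int ftq_head, ftq_tail, ftq_count;

  /* Predict the next fetch block; on a miss, fall back to a
   * maximum-length sequential block (an assumed policy). */
  static fetch_block ftb_predict(uint32_t pc)
  {
      ftb_entry *e = &ftb[(pc >> 2) % FTB_SETS];
      if (e->valid && e->tag == pc)
          return e->blk;
      fetch_block fall = { pc, 8, pc + 8 * 4, false };
      return fall;
  }

  /* Predictor side: runs every cycle, filling the FTQ until it is full. */
  static void predict_stage(uint32_t *fetch_pc)
  {
      if (ftq_count == FTQ_SIZE)
          return;                       /* queue full: predictor stalls */
      fetch_block blk = ftb_predict(*fetch_pc);
      ftq[ftq_tail] = blk;
      ftq_tail = (ftq_tail + 1) % FTQ_SIZE;
      ftq_count++;
      *fetch_pc = blk.taken ? blk.target : blk.start + blk.length * 4;
  }

  /* Fetch side: drains one predicted block per cycle when available. */
  static void fetch_stage(void)
  {
      if (ftq_count == 0)
          return;                       /* nothing predicted yet */
      fetch_block blk = ftq[ftq_head];
      ftq_head = (ftq_head + 1) % FTQ_SIZE;
      ftq_count--;
      printf("fetch block @0x%08x, %u insts, next 0x%08x\n",
             (unsigned)blk.start, (unsigned)blk.length,
             (unsigned)(blk.taken ? blk.target
                                  : blk.start + blk.length * 4));
  }

  int main(void)
  {
      uint32_t fetch_pc = 0x1000;
      for (int cycle = 0; cycle < 6; cycle++) {
          predict_stage(&fetch_pc);     /* the two stages advance          */
          fetch_stage();                /* independently in each cycle     */
      }
      return 0;
  }

In this toy model the queue is what buys the decoupling: the predictor can keep running ahead while fetch is stalled (for example on an I-cache miss), and fetch can keep draining buffered blocks if the predictor briefly stalls, which is one way to see why the predictor can be clocked faster than the rest of the front end.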