Exploring the limits of prefetching

  • Authors:
  • Philip G. Emma; Allan Hartstein; Thomas R. Puzak; Vijayalakshmi Srinivasan

  • Affiliations:
  • IBM Research Division, Thomas J. Watson Research Center, P.O. Box 218, Yorktown Heights, New York (all authors)

  • Venue:
  • IBM Journal of Research and Development
  • Year:
  • 2005

Abstract

We formulate a new approach for evaluating a prefetching algorithm. We first carry out a profiling run of a program to identify all of the misses and corresponding locations in the program where prefetches for the misses can be initiated. We then systematically control the number of misses that are prefetched, the timeliness of these prefetches, and the number of unused prefetches. We validate the accuracy of our approach by comparing it to one based on a Markov prefetch algorithm. This allows us to measure the potential benefit that any application can receive from prefetching and to analyze application behavior under conditions that cannot be explored with any known prefetching algorithm.

Next, we analyze a system parameter that is vital to prefetching performance, the line transfer interval, which is the number of processor cycles required to transfer a cache line. This interval is determined by technology and bandwidth. We show that under ideal conditions, prefetching can remove nearly all of the stalls associated with cache misses. Unfortunately, real processor implementations are less than ideal. In particular, the trend in processor frequency is outrunning on-chip and off-chip bandwidths. Today, it is not uncommon for processor frequency to be three or four times bus frequency. Under these conditions, we show that nearly all of the performance benefits derived from prefetching are eroded, and in many cases prefetching actually degrades performance. We carry out quantitative and qualitative analyses of these tradeoffs and show that there is a linear relationship between overall performance and three metrics: percentage of misses prefetched, percentage of unused prefetches, and bandwidth.
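
The kind of controlled experiment the abstract describes can be illustrated with a small model. The Python sketch below is not the authors' simulator; the function name, the parameter names, and the additive stall model are assumptions introduced here purely for illustration. It treats the fraction of misses prefetched, the fraction of the miss penalty hidden by a timely prefetch, the number of unused prefetches, and the line transfer interval as independent knobs, in the spirit of the methodology above, and estimates the resulting stall cycles.

    def stall_cycles(num_misses, miss_penalty, line_transfer_interval,
                     frac_prefetched, frac_hidden, unused_prefetches):
        """Estimate total stall cycles under a simple additive model.

        num_misses             -- demand misses observed in the profiling run
        miss_penalty           -- cycles to service an unprefetched miss
        line_transfer_interval -- cycles the bus needs to move one cache line
        frac_prefetched        -- fraction of misses covered by a prefetch
        frac_hidden            -- fraction of the penalty a timely prefetch hides
        unused_prefetches      -- prefetched lines that are never referenced
        """
        covered = int(num_misses * frac_prefetched)
        uncovered = num_misses - covered

        # Uncovered misses pay the full penalty; covered misses pay only
        # the part of the penalty the prefetch did not hide.
        stalls = uncovered * miss_penalty
        stalls += covered * miss_penalty * (1.0 - frac_hidden)

        # Every line moved (demand miss, useful prefetch, or unused
        # prefetch) occupies the bus for one line transfer interval,
        # so unused prefetches add pure bandwidth pressure.
        stalls += (num_misses + unused_prefetches) * line_transfer_interval
        return stalls

    if __name__ == "__main__":
        # Sweep the line transfer interval: as the bus becomes slow
        # relative to the processor (larger interval), the transfer term
        # grows and the benefit of covering 90% of misses shrinks.
        for lti in (2, 8, 32):
            no_pf = stall_cycles(10_000, 100, lti, 0.0, 0.0, 0)
            with_pf = stall_cycles(10_000, 100, lti, 0.9, 0.9, 5_000)
            print(f"interval={lti:2d}: {no_pf:>9,} -> {with_pf:>9,} stall cycles")

By construction, this toy estimate is linear in the percentage of misses prefetched, the number of unused prefetches, and the line transfer interval, which is consistent with the linear relationship the abstract reports between overall performance and those three metrics.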