Streamlining data cache access with fast address calculation

  • Authors:
  • Todd M. Austin; Dionisios N. Pnevmatikatos; Gurindar S. Sohi

  • Affiliations:
  • Computer Sciences Department, University of Wisconsin-Madison, 1210 W. Dayton Street, Madison, WI (all authors)

  • Venue:
  • ISCA '95: Proceedings of the 22nd Annual International Symposium on Computer Architecture
  • Year:
  • 1995


Abstract

For many programs, especially integer codes, untolerated load instruction latencies account for a significant portion of total execution time. In this paper, we present the design and evaluation of a fast address generation mechanism capable of eliminating the delays caused by effective address calculation for many loads and stores.

Our approach works by predicting early in the pipeline (part of) the effective address of a memory access and using this predicted address to speculatively access the data cache. If the prediction is correct, the cache access is overlapped with non-speculative effective address calculation. Otherwise, the cache is accessed again in the following cycle, this time using the correct effective address. The impact on the cache access critical path is minimal; the prediction circuitry adds only a single OR operation before cache access can commence. In addition, verification of the predicted effective address is completely decoupled from the cache access critical path.

Analyses of program reference behavior and subsequent performance analysis of this approach show that this design is a good one, servicing enough accesses early enough to result in speedups for all the programs we tested. Our approach also responds well to software support, which can significantly reduce the number of mispredicted effective addresses, in many cases providing better program speedups and reducing cache bandwidth requirements.
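
To make the mechanism concrete, the following C sketch simulates the idea of OR-based address prediction described in the abstract: the low-order bits of the effective address are predicted by OR-ing the base register with the offset, which matches the true sum whenever the addition generates no carries (common when the base is block-aligned and the offset is small). The cache geometry (16 KB direct-mapped, 32-byte blocks) and all identifiers here are illustrative assumptions, not details taken from the paper.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical cache geometry for illustration only:
 * 16 KB direct-mapped cache, 32-byte blocks ->
 * 5 block-offset bits and 9 set-index bits. */
#define BLOCK_BITS 5
#define INDEX_BITS 9
#define INDEX_MASK ((((uint32_t)1 << INDEX_BITS) - 1) << BLOCK_BITS)
#define LOW_MASK   (((uint32_t)1 << (BLOCK_BITS + INDEX_BITS)) - 1)

/* Fast (speculative) address calculation: replace the add with an OR.
 * The result equals base + offset whenever the add would produce no
 * carries in the low-order bits. */
static uint32_t predict_eff_addr(uint32_t base, int32_t offset)
{
    return base | (uint32_t)offset;
}

int main(void)
{
    struct { uint32_t base; int32_t offset; } refs[] = {
        { 0x1000F000u, 0x08 },  /* aligned base, small offset: prediction holds */
        { 0x1000F01Cu, 0x08 },  /* carry into the index bits: misprediction */
    };

    for (size_t i = 0; i < sizeof refs / sizeof refs[0]; i++) {
        uint32_t predicted = predict_eff_addr(refs[i].base, refs[i].offset);
        uint32_t actual    = refs[i].base + (uint32_t)refs[i].offset;

        /* Verification is off the cache-access critical path: compare the
         * predicted low-order bits against the full adder's result. On a
         * mismatch, the cache is re-accessed in the following cycle with
         * the correct effective address. */
        int correct = (predicted & LOW_MASK) == (actual & LOW_MASK);
        printf("base=%08x off=%d predicted idx=%u actual idx=%u -> %s\n",
               (unsigned)refs[i].base, (int)refs[i].offset,
               (unsigned)((predicted & INDEX_MASK) >> BLOCK_BITS),
               (unsigned)((actual & INDEX_MASK) >> BLOCK_BITS),
               correct ? "speculation succeeds" : "re-access next cycle");
    }
    return 0;
}
```

In this sketch, the second reference mispredicts because adding the offset carries out of the block-offset bits and changes the set index, illustrating why software support that keeps bases aligned and offsets small can reduce mispredictions.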