Informing memory operations: memory performance feedback mechanisms and their applications

  • Authors:
  • Mark Horowitz; Margaret Martonosi; Todd C. Mowry; Michael D. Smith

  • Affiliations:
  • Stanford Univ., Stanford, CA; Princeton Univ., Princeton, NJ; Carnegie Mellon Univ., Pittsburgh, PA; Harvard Univ., Cambridge, MA

  • Venue:
  • ACM Transactions on Computer Systems (TOCS)
  • Year:
  • 1998

Abstract

Memory latency is an important bottleneck in system performance that cannot be adequately solved by hardware alone. Several promising software techniques have been shown to address this problem successfully in specific situations. However, the generality of these software approaches has been limited because current architectures do not provide a fine-grained, low-overhead mechanism for observing and reacting to memory behavior directly. To fill this need, this article proposes a new class of memory operations called informing memory operations, which essentially consist of a memory operation combined (either implicitly or explicitly) with a conditional branch-and-link operation that is taken only if the reference suffers a cache miss. This article describes two different implementations of informing memory operations. One is based on a cache-outcome condition code, and the other is based on low-overhead traps. We find that modern in-order-issue and out-of-order-issue superscalar processors already contain the bulk of the necessary hardware support. We describe how a number of software-based memory optimizations can exploit informing memory operations to enhance performance, and we look at cache coherence with fine-grained access control as a case study. Our performance results demonstrate that the runtime overhead of invoking the informing mechanism on the Alpha 21164 and MIPS R10000 processors is generally small enough to provide considerable flexibility to hardware and software designers, and that the cache coherence application has improved performance compared to other current solutions. We believe that the inclusion of informing memory operations in future processors may spur even more innovative performance optimizations.
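
The core idea, a memory reference whose miss outcome is directly visible to software, can be illustrated with a small, self-contained C simulation. The sketch below is an assumption-laden illustration rather than the paper's hardware design: the direct-mapped tag array, the informing_load() helper, and the miss-profiling reaction are all invented for exposition, and stand in for the cache-outcome condition code or low-overhead trap a real informing load would provide.

    /* Hedged sketch: software model of the semantics of an informing load.
       The tiny direct-mapped tag array, informing_load(), and the
       miss-profiling reaction are illustrative assumptions, not the
       paper's hardware mechanism. */
    #include <stdint.h>
    #include <stdio.h>

    #define LINE_SHIFT  6      /* assume 64-byte cache lines */
    #define NUM_SETS    128    /* assume a direct-mapped cache with 128 sets */

    static uintptr_t tags[NUM_SETS];   /* 0 = empty set */
    static unsigned long miss_count;   /* per-reference miss profile */

    /* Returns *addr and reports via *missed whether the reference "missed"
       in the simulated cache; real hardware would instead set a
       cache-outcome condition code or raise a low-overhead trap. */
    static uint64_t informing_load(const uint64_t *addr, int *missed)
    {
        uintptr_t line = (uintptr_t)addr >> LINE_SHIFT;
        unsigned set = (unsigned)(line % NUM_SETS);
        *missed = (tags[set] != line);
        tags[set] = line;
        return *addr;
    }

    int main(void)
    {
        uint64_t data[1024] = {0};
        uint64_t sum = 0;

        for (int i = 0; i < 1024; i++) {
            int missed;
            sum += informing_load(&data[i], &missed);
            if (missed)        /* the branch taken only on a cache miss */
                miss_count++;  /* software reacts: here, simple profiling */
        }
        printf("sum=%llu misses=%lu\n", (unsigned long long)sum, miss_count);
        return 0;
    }

In the common case (a hit) the branch falls through and the reference proceeds at full speed; only misses divert control to the software reaction, which is what keeps the mechanism's overhead low enough for per-reference use.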