Cache-Memory Interfaces in Compressed Memory Systems

  • Authors:
  • Caroline D. Benveniste, Peter A. Franaszek, John T. Robinson

  • Affiliations:
  • IBM Research Division, Yorktown Heights, NY (all authors)

  • Venue:
  • IEEE Transactions on Computers
  • Year:
  • 2001

Abstract

We consider a number of cache/memory hierarchy design issues in systems with compressed random access memories (C-RAMs), in which compression and decompression occur automatically to and from main memory. With a C-RAM as main memory, the bulk of main memory contents are stored in compressed format and dynamically decompressed to handle cache misses at the next higher level of the memory hierarchy. This is the general approach adopted in IBM's Memory Expansion Technology (MXT). The design of the main memory directory structures and storage allocation methods in such systems is described elsewhere; here, we focus on issues related to cache-memory interfaces. In particular, if the cache line size (of the cache or caches to which main memory data is transferred) differs from the size of the unit of compression in main memory, bandwidth and latency problems can occur. Another issue is that of guaranteed forward progress, that is, ensuring that modified lines can be written to the compressed main memory so that the system can continue operation even if overall compression deteriorates. We study several approaches for solving these problems, using trace-driven analysis to evaluate alternatives.
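
The following is a minimal sketch, not taken from the paper, of the cache-line/compression-unit mismatch the abstract describes: a miss for a small cache line is served by decompressing the entire, larger compression unit that contains it, which is where the extra bandwidth and latency come from. The sizes (64 B line, 1 KB unit, roughly the granularity of MXT-class designs) and the stand-in run-length coder are assumptions chosen only to keep the example self-contained and runnable.

```c
/* Toy model of a C-RAM read path. The parameters LINE_SIZE and UNIT_SIZE
 * and the RLE "compression" are illustrative assumptions, not the paper's
 * actual design. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define LINE_SIZE 64    /* bytes per cache line (assumed)       */
#define UNIT_SIZE 1024  /* bytes per compression unit (assumed) */

/* Decompress a run-length-coded unit stored as (count, byte) pairs. */
static size_t rle_decompress(const uint8_t *src, size_t src_len,
                             uint8_t *dst, size_t dst_cap)
{
    size_t out = 0;
    for (size_t i = 0; i + 1 < src_len && out < dst_cap; i += 2) {
        size_t run = src[i];
        if (run > dst_cap - out)
            run = dst_cap - out;
        memset(dst + out, src[i + 1], run);
        out += run;
    }
    return out;
}

/* Serve a cache miss: the whole compression unit must be decompressed even
 * though the processor only needs LINE_SIZE bytes of it -- the source of the
 * bandwidth/latency overhead when line size and unit size differ. */
static void fetch_line(const uint8_t *compressed_unit, size_t comp_len,
                       size_t line_index, uint8_t *line_out)
{
    uint8_t unit[UNIT_SIZE];
    rle_decompress(compressed_unit, comp_len, unit, UNIT_SIZE);
    memcpy(line_out, unit + line_index * LINE_SIZE, LINE_SIZE);
}

int main(void)
{
    /* A highly compressible unit: 1024 bytes of 0xAB held in 8 RLE pairs. */
    uint8_t comp[16];
    for (int i = 0; i < 8; i++) {
        comp[2 * i]     = 128;
        comp[2 * i + 1] = 0xAB;
    }

    uint8_t line[LINE_SIZE];
    fetch_line(comp, sizeof comp, 3, line);  /* miss on line 3 of the unit */
    printf("line[0] = 0x%02X (expected 0xAB)\n", line[0]);
    return 0;
}
```

In this toy model a 16-byte compressed unit expands to 1 KB before a single 64-byte line can be returned; the paper's trace-driven analysis evaluates interface alternatives that mitigate exactly this kind of overhead, along with mechanisms for guaranteed forward progress when compression deteriorates.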