Compression in cache design

  • Authors:
  • Ali-Reza Adl-Tabatabai; Anwar M. Ghuloum; Shobhit O. Kanaujia

  • Affiliations:
  • Intel Corporation; Intel Corporation; Intel Corporation

  • Venue:
  • Proceedings of the 21st annual international conference on Supercomputing

  • Year:
  • 2007

Abstract

Increasing cache capacity via compression enables designers to improve the performance of existing designs at a small incremental cost, further leveraging the large die area invested in last-level caches. This paper explores the compressed cache design space with a focus on implementation feasibility. Our compression schemes use companion line pairs (cache lines whose addresses differ by a single bit) as candidates for compression. We propose two novel compressed cache organizations: the companion bit remapped cache and the pseudo-associative cache. Our cache organizations use a fixed-width physical cache line implementation while providing a variable-length logical cache line organization, without changing the number of sets or ways and with minimal increase in state per tag. We evaluate banked and pairwise schemes as two alternatives for storing compressed companion pairs within a physical cache line. We also evaluate companion line prefetching (CLP), a simple yet effective prefetching mechanism that works in conjunction with our compression scheme; CLP is nearly pollution free since it prefetches only lines that are compression candidates. Using a detailed cycle-accurate IA-32 simulator, we measure the performance of several third-level compressed cache designs on a representative collection of workloads. Our experiments show that our cache compression designs improve IPC for all cache-sensitive workloads, even those with modest data compressibility. The pairwise pseudo-associative compressed cache organization with companion line prefetching is the best configuration, providing a mean IPC improvement of 19% for cache-sensitive workloads and a best-case IPC improvement of 84%. Finally, our cache designs exhibit negligible overall IPC degradation for cache-insensitive workloads.
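
As a rough illustration of the companion-pair idea described in the abstract, the sketch below (in C) computes a line's companion address by flipping one fixed bit of the line address. The line size, the chosen bit position, and the helper names are illustrative assumptions for this sketch, not details taken from the paper.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative parameters (assumptions, not taken from the paper). */
    #define LINE_SIZE      64u   /* bytes per cache line                      */
    #define COMPANION_BIT  0u    /* which bit of the line address is flipped  */

    /* A line address is the byte address with the block-offset bits removed. */
    static uint64_t line_address(uint64_t byte_addr)
    {
        return byte_addr / LINE_SIZE;
    }

    /* Companion lines differ in exactly one bit of their line address,
     * so the companion is obtained by XOR-ing that single bit.         */
    static uint64_t companion_line(uint64_t line_addr)
    {
        return line_addr ^ (1ull << COMPANION_BIT);
    }

    int main(void)
    {
        uint64_t addr = 0x12345u * LINE_SIZE;   /* some byte address */
        uint64_t line = line_address(addr);
        uint64_t comp = companion_line(line);

        /* In a pairwise scheme, the two lines could share one physical
         * line if both compress to at most half the line size.         */
        printf("line 0x%llx <-> companion 0x%llx\n",
               (unsigned long long)line, (unsigned long long)comp);
        return 0;
    }

Under this pairing, a companion pair becomes a compression candidate only when both halves fit together in a single fixed-width physical line, which is why prefetching the companion of a demand miss tends to pollute the cache very little.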