C-Pack: a high-performance microprocessor cache compression algorithm

  • Authors:
  • Xi Chen; Lei Yang; Robert P. Dick; Li Shang; Haris Lekatsas

  • Affiliations:
  • Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor, MI; Google Inc., Mountain View, CA; Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor, MI; Department of Electrical and Computer Engineering, University of Colorado, Boulder, CO; Vorras Corporation, Princeton, NJ

  • Venue:
  • IEEE Transactions on Very Large Scale Integration (VLSI) Systems

  • Year:
  • 2010

Abstract

Microprocessor designers have been torn between tight constraints on the amount of on-chip cache memory and the high latency of off-chip memory, such as dynamic random access memory. Accessing off-chip memory generally takes an order of magnitude more time than accessing on-chip cache, and two orders of magnitude more time than executing an instruction. Computer systems and microarchitecture researchers have proposed using hardware data compression units within the memory hierarchies of microprocessors in order to improve performance, energy efficiency, and functionality. However, most past work, and all work on cache compression, has made unsubstantiated assumptions about the performance, power consumption, and area overheads of the proposed compression algorithms and hardware. It is not possible to determine whether compression at levels of the memory hierarchy closest to the processor is beneficial without understanding its costs. Furthermore, as we show in this paper, raw compression ratio is not always the most important metric. In this work, we present a lossless compression algorithm that has been designed for fast on-line data compression, and cache compression in particular. The algorithm has a number of novel features tailored for this application, including combining pairs of compressed lines into one cache line and allowing parallel compression of multiple words while using a single dictionary and without degradation in compression ratio. We reduced the proposed algorithm to a register transfer level hardware design, permitting performance, power consumption, and area estimation. Experiments comparing our work to previous work are described.
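To make the word-level, dictionary-based compression described above concrete, the sketch below compresses one cache line of 32-bit words against a small shared dictionary. It is a minimal illustration only: the three-way pattern code (all-zero word, full dictionary match, uncompressed literal), the 2-bit code width, the 8-entry FIFO dictionary, and the 16-word line size are assumptions for the example, not the actual C-Pack pattern set or code assignment, which the paper itself defines (including partial-match patterns and the pair-packing of compressed lines).

```c
/*
 * Illustrative sketch of word-level, pattern-plus-dictionary cache-line
 * compression. The pattern set, code widths, dictionary size, and line
 * size below are assumptions for this example, not the C-Pack encoding.
 */
#include <stdint.h>
#include <stdio.h>

#define WORDS_PER_LINE 16   /* 64-byte line of 32-bit words (assumed)     */
#define DICT_ENTRIES    8   /* small dictionary shared by all words       */

/* Compress one cache line; return the compressed size in bits. */
static unsigned compress_line(const uint32_t line[WORDS_PER_LINE])
{
    uint32_t dict[DICT_ENTRIES] = {0};
    unsigned next = 0;          /* next dictionary slot (FIFO replacement) */
    unsigned bits = 0;

    for (int i = 0; i < WORDS_PER_LINE; i++) {
        uint32_t w = line[i];

        if (w == 0) {           /* pattern 1: all-zero word               */
            bits += 2;
            continue;
        }

        int hit = -1;           /* pattern 2: full dictionary match       */
        for (unsigned d = 0; d < DICT_ENTRIES; d++)
            if (dict[d] == w) { hit = (int)d; break; }

        if (hit >= 0) {
            bits += 2 + 3;      /* code + dictionary index (log2(8) bits) */
        } else {                /* pattern 3: uncompressed literal        */
            bits += 2 + 32;
            dict[next] = w;     /* insert the missed word for later reuse */
            next = (next + 1) % DICT_ENTRIES;
        }
    }
    return bits;
}

int main(void)
{
    uint32_t line[WORDS_PER_LINE] = {0};
    line[1] = 0xDEADBEEF; line[5] = 0xDEADBEEF; line[9] = 0x00000042;

    unsigned bits = compress_line(line);
    printf("compressed: %u bits vs. %u bits raw (ratio %.2f)\n",
           bits, WORDS_PER_LINE * 32, (double)bits / (WORDS_PER_LINE * 32));
    return 0;
}
```

A hardware version of this idea would evaluate several words per cycle against the same dictionary, which is the parallel-compression property the abstract highlights; the serial loop here is only for clarity.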