The case for compressed caching in virtual memory systems

  • Authors:
  • Paul R. Wilson, Scott F. Kaplan, Yannis Smaragdakis

  • Affiliations:
  • Dept. of Computer Sciences, University of Texas at Austin, Austin, Texas (all authors)

  • Venue:
  • ATEC '99: Proceedings of the 1999 USENIX Annual Technical Conference
  • Year:
  • 1999


Abstract

Compressed caching uses part of the available RAM to hold pages in compressed form, effectively adding a new level to the virtual memory hierarchy. This level attempts to bridge the huge performance gap between normal (uncompressed) RAM and disk. Unfortunately, previous studies did not show a consistent benefit from the use of compressed virtual memory. In this study, we show that technology trends favor compressed virtual memory: it is attractive now, offering paging-cost reductions of several tens of percent, and it will become increasingly attractive as CPU speeds continue to outpace disk speeds. Two elements of our approach are innovative. First, we introduce novel compression algorithms suited to compressing in-memory data representations. These algorithms are competitive with more mature Ziv-Lempel compressors, and complement them. Second, we adaptively determine how much memory (if any) should be compressed by keeping track of recent program behavior. This solves the problem of different programs, or phases within the same program, performing best with different amounts of compressed memory.
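The core idea in the abstract can be sketched as a toy two-level RAM cache: pages evicted from the uncompressed region are compressed into a second RAM region, and only overflow from that region goes to "disk". The sketch below is a minimal Python model under stated assumptions: it uses `zlib` as a stand-in for the paper's custom in-memory compressors, and the class name, LRU policy, and sizing are illustrative, not from the paper.

```python
import zlib
from collections import OrderedDict

class CompressedCache:
    """Toy model of compressed caching: a hypothetical sketch, not the
    paper's implementation. Evicted pages are compressed into a RAM
    region; only overflow from that region is pushed to 'disk'."""

    def __init__(self, uncompressed_slots, compressed_budget):
        self.uncompressed = OrderedDict()           # page_id -> raw bytes, in LRU order
        self.compressed = OrderedDict()             # page_id -> zlib-compressed bytes
        self.disk = {}                              # backing store (slow path)
        self.uncompressed_slots = uncompressed_slots
        self.compressed_budget = compressed_budget  # byte budget for compressed region
        self.stats = {"hits": 0, "compressed_hits": 0, "disk_reads": 0}

    def _compressed_bytes(self):
        return sum(len(blob) for blob in self.compressed.values())

    def _evict_uncompressed(self):
        # Compress the LRU victim into the compressed region.
        page_id, data = self.uncompressed.popitem(last=False)
        self.compressed[page_id] = zlib.compress(data)
        # If over budget, spill the oldest compressed pages to disk.
        while self._compressed_bytes() > self.compressed_budget:
            victim, blob = self.compressed.popitem(last=False)
            self.disk[victim] = zlib.decompress(blob)

    def access(self, page_id, data=None):
        if page_id in self.uncompressed:            # fast path: ordinary RAM hit
            self.uncompressed.move_to_end(page_id)
            self.stats["hits"] += 1
            return self.uncompressed[page_id]
        if page_id in self.compressed:              # decompress: far cheaper than disk
            data = zlib.decompress(self.compressed.pop(page_id))
            self.stats["compressed_hits"] += 1
        elif page_id in self.disk:                  # slow path: simulated disk read
            data = self.disk.pop(page_id)
            self.stats["disk_reads"] += 1
        if len(self.uncompressed) >= self.uncompressed_slots:
            self._evict_uncompressed()
        self.uncompressed[page_id] = data
        return data
```

With two uncompressed slots, touching pages 0, 1, 2 and then re-touching page 0 services the fault from the compressed region rather than disk, which is exactly the cost the paper's approach aims to trade against. The adaptive part of the paper's approach would, by analogy, tune `compressed_budget` online from recent reference behavior; that policy is not modeled here.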