Practical prefetching via data compression

  • Authors:
  • Kenneth M. Curewitz; P. Krishnan; Jeffrey Scott Vitter

  • Affiliations:
  • Digital Equipment Corp., 146 Main Street, Maynard, MA; Dept. of Computer Science, Brown University, Providence, RI; Dept. of Computer Science, Duke University, Durham, NC

  • Venue:
  • SIGMOD '93: Proceedings of the 1993 ACM SIGMOD International Conference on Management of Data
  • Year:
  • 1993

Abstract

An important issue affecting response-time performance in current OODB and hypertext systems is the I/O involved in moving objects from slow memory to cache. A promising way to tackle this problem is prefetching, in which we predict the user's next page requests and fetch those pages into cache in the background. Current databases perform limited prefetching using techniques derived from older virtual-memory systems. The novel idea of using data compression techniques for prefetching was recently advocated in [KrV, ViK], where prefetchers based on the Lempel-Ziv data compressor (the UNIX compress command) were shown theoretically to be optimal in the limit. In this paper we analyze the practical aspects of using data compression techniques for prefetching. We adapt three well-known data compressors to obtain three simple, deterministic, and universal prefetchers. We simulate our prefetchers on sequences of page accesses derived from the OO1 and OO7 benchmarks and from CAD applications, and demonstrate significant reductions in fault rate. We examine the important issues of cache replacement, the size of the data structure used by the prefetcher, and problems arising from bursts of "fast" page requests (which leave virtually no time between adjacent requests for prefetching and bookkeeping). We conclude that prediction for prefetching based on data compression techniques holds great promise.
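For intuition about how a compressor-derived predictor works, here is a minimal sketch assuming an LZ78-style parse tree over the page-access sequence. The class name, trie layout, and prediction rule are illustrative assumptions, not the prefetchers implemented or evaluated in the paper.

```python
class LZPrefetcher:
    """Minimal LZ78-style next-page predictor (illustrative sketch, not
    the paper's implementation). It parses the page-access stream into a
    trie of phrases, as the Lempel-Ziv compressor does, and at each step
    suggests prefetching the page most frequently seen after the current
    phrase."""

    def __init__(self):
        self.root = {}          # trie node: page -> [count, child node]
        self.node = self.root   # current position in the LZ parse

    def predict(self):
        """Page to prefetch next, or None at a fresh phrase boundary."""
        if not self.node:
            return None
        return max(self.node, key=lambda page: self.node[page][0])

    def access(self, page):
        """Record an actual page access and advance the LZ78 parse."""
        if page in self.node:
            self.node[page][0] += 1
            self.node = self.node[page][1]  # extend the current phrase
        else:
            self.node[page] = [1, {}]       # new leaf ends the phrase...
            self.node = self.root           # ...and the parse restarts


prefetcher = LZPrefetcher()
for page in [1, 2, 1, 2, 3, 1, 2, 3, 1]:
    guess = prefetcher.predict()  # issue the background prefetch here
    prefetcher.access(page)
```

The appeal of this family of predictors is universality: the trie adapts to whatever access pattern the workload exhibits, with no application-specific model built in, which is why compressors that approach the source entropy also yield predictors that approach optimal fault rates in the limit.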