TaP: table-based prefetching for storage caches

  • Authors:
  • Mingju Li; Elizabeth Varki; Swapnil Bhatia; Arif Merchant

  • Affiliations:
  • University of New Hampshire; University of New Hampshire; University of New Hampshire; Hewlett-Packard Labs

  • Venue:
  • FAST '08: Proceedings of the 6th USENIX Conference on File and Storage Technologies
  • Year:
  • 2008

Abstract

TaP is a sequential prefetching and caching technique for storage caches that improves the read-ahead cache hit rate and system response time. A unique feature of TaP is its use of a table to detect sequential access patterns in the I/O workload and to dynamically determine the optimum prefetch cache size. Compared to some popular prefetching techniques, TaP achieves a better hit rate and response time while using a read cache that is often an order of magnitude smaller than that needed by the other techniques. TaP is especially efficient when the I/O workload consists of interleaved requests from various applications, only some of which access their data sequentially. For example, when the interleaved workload consists of 10% sequential application data and 90% random application data, TaP achieves the same hit rate as the other techniques with a prefetch cache that is 100 times smaller than the cache they require.
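
The abstract describes detecting sequential streams with a table of recent request addresses. The Python sketch below is a rough illustration of that general idea, not the authors' exact algorithm: it keeps an LRU-ordered table of recently missed block addresses, and a request whose immediately preceding block appears in the table is treated as part of a sequential stream and triggers a one-block prefetch. The class name `TableBasedDetector`, the `table_size` parameter, and the one-block-ahead prefetch policy are all illustrative assumptions; the real TaP additionally uses table statistics to size the prefetch cache, which this sketch omits.

```python
from collections import OrderedDict

class TableBasedDetector:
    """Illustrative sketch (not the published TaP algorithm) of
    table-based sequential-stream detection.

    The table records the block addresses of recent read misses.
    If a new request's immediately preceding block is in the table,
    the request is treated as part of a sequential stream and a
    prefetch is suggested. Entries are evicted in LRU order.
    """

    def __init__(self, table_size=1024):
        self.table_size = table_size          # assumed capacity, for illustration
        self.table = OrderedDict()            # block address -> None, in LRU order

    def on_read_miss(self, addr):
        """Return a block address to prefetch, or None."""
        prefetch = None
        prev = addr - 1
        if prev in self.table:
            # A contiguous predecessor was seen recently: sequential stream.
            del self.table[prev]              # the entry has served its purpose
            prefetch = addr + 1               # suggest prefetching the next block
        # Record the current address so a continuing stream is still
        # recognized on a later miss.
        self.table[addr] = None
        if len(self.table) > self.table_size:
            self.table.popitem(last=False)    # evict the least recently used entry
        return prefetch
```

A short usage example under the same assumptions: feeding the miss addresses 100, 205, 101, 300, 102 to `on_read_miss` yields no prefetch for the first two and for 300, but suggests prefetching block 102 when 101 arrives and block 103 when 102 arrives, since each follows a recently recorded address.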