Strongly Competitive Algorithms for Caching with Pipelined Prefetching

  • Authors:
  • Alexander Gaysinsky, Alon Itai, Hadas Shachnai

  • Venue:
  • ESA '01: Proceedings of the 9th Annual European Symposium on Algorithms
  • Year:
  • 2001

Abstract

Prefetching and caching are widely used for improving the performance of file systems. Recent studies have shown that it is important to integrate the two. In this paper we consider the following problem. Suppose that a program makes a sequence of m accesses to data blocks. The cache can hold k blocks, where k ≤ m. An access to a block in the cache incurs one time unit, and fetching a missing block incurs d time units. A fetch of a new block can be initiated while a previous fetch is in progress; thus, up to d block fetches can be in progress simultaneously. The locality of references to the cache is captured by the access graph model of [2]. The goal is to find a policy for prefetching and caching which minimizes the overall execution time of a given reference sequence. This problem is called caching with locality and pipelined prefetching (CLPP). Our study is motivated by the pipelined operation of modern memory controllers and by program execution on fast processors. For the offline case we show that an algorithm introduced in [4] is optimal. For the online case we give an algorithm whose cost is within a factor of 2 of the best deterministic online algorithm, for any access graph and any k, d ≥ 1. Improved ratios are obtained for several important classes of access graphs, including complete graphs and directed acyclic graphs (DAGs). Finally, we study the CLPP problem under a Markovian access model on branch trees, which often arise in applications. We give algorithms whose expected performance ratios are within a factor of 2 of the optimal.
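
To make the cost model concrete, below is a minimal Python sketch of the timing rules stated in the abstract: a cached access costs 1 time unit, a fetch completes d time units after it is issued, and at most one fetch is issued per time unit, so up to d fetches overlap in the pipeline. The simulator `run_clpp`, the `policy` callback interface, and the `demand_fetch` baseline are all illustrative assumptions for exposition; they are not the paper's algorithms (in particular, `demand_fetch` is not the offline-optimal policy of [4]) and only reproduce the cost accounting.

```python
from typing import Callable, Dict, FrozenSet, List, Optional, Tuple

# A policy decision: (block to fetch, block to evict if the cache is full),
# or None to issue no fetch this time unit.
Decision = Optional[Tuple[int, Optional[int]]]
Policy = Callable[[int, int, List[int], FrozenSet[int], Dict[int, int]], Decision]


def run_clpp(requests: List[int], k: int, d: int, policy: Policy) -> int:
    """Total execution time of `requests` under the abstract's cost model:
    a hit costs one time unit, a fetch completes d units after issue, and
    at most one fetch is issued per unit, so up to d fetches are in flight."""
    cache: set = set()              # blocks currently resident
    arriving: Dict[int, int] = {}   # in-flight fetches: block -> completion time
    t = 0                           # global clock (time units)
    i = 0                           # index of the next request to serve
    while i < len(requests):
        # Retire fetches that have completed by the current time.
        for b in [b for b, done in arriving.items() if done <= t]:
            cache.add(b)
            del arriving[b]
        # The loop body runs once per time unit, so at most one fetch is
        # issued per unit; with latency d, at most d fetches overlap.
        decision = policy(t, i, requests, frozenset(cache), dict(arriving))
        if decision is not None:
            fetch, evict = decision
            missing = fetch not in cache and fetch not in arriving
            room = len(cache) + len(arriving) < k   # slots reserved at issue time
            if missing and not room and evict in cache:
                cache.remove(evict)                 # free a slot for the new block
                room = True
            if missing and room:
                arriving[fetch] = t + d             # completes d units from now
        # Serve the next request on a hit; otherwise stall for this time unit.
        if requests[i] in cache:
            i += 1
        t += 1
    return t


def demand_fetch(t: int, i: int, requests: List[int],
                 cache: FrozenSet[int], arriving: Dict[int, int]) -> Decision:
    """Demand fetching with Belady-style eviction (an illustrative baseline):
    fetch the pending block only once it is missing, and evict the cached
    block whose next use lies farthest in the future."""
    nxt = requests[i]
    if nxt in cache or nxt in arriving:
        return None
    future = requests[i + 1:]

    def next_use(b: int) -> int:
        return future.index(b) if b in future else len(future)

    evict = max(cache, key=next_use, default=None)
    return (nxt, evict)


if __name__ == "__main__":
    seq = [1, 2, 3, 1, 2, 4, 1, 2, 3, 4]
    # Under pure demand fetching every miss stalls for the full latency d,
    # so this prints 30: 5 hits at 1 unit each + 5 misses at d + 1 = 5 units.
    print(run_clpp(seq, k=3, d=4, policy=demand_fetch))
```

A prefetching policy that issues fetches ahead of the request stream would overlap the d-unit latencies with useful accesses and lower this total; the paper's question is how close offline and online schedules can come to the best achievable execution time.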