Strongly competitive algorithms for caching with pipelined prefetching

  • Authors:
  • Alexander Gaysinsky; Alon Itai; Hadas Shachnai

  • Affiliations:
  • Computer Science Department, The Technion, Haifa 32000, Israel (all authors)

  • Venue:
  • Information Processing Letters
  • Year:
  • 2004

Abstract

Suppose that a program makes a sequence of m accesses (references) to data blocks, and the cache can hold k blocks. An access to a block in the cache takes one time unit, and fetching a missing block takes d time units. A fetch of a new block can be initiated while a previous fetch is in progress; thus, up to min{k, d} block fetches can be in progress simultaneously. Any sequence of block references is modeled as a walk on the access graph of the program. The goal is to find a policy for prefetching and caching that minimizes the overall execution time of a given reference sequence. This study is motivated by the pipelined operation of modern memory controllers and by program execution on fast processors. In the offline case, we show that an algorithm proposed by Cao et al. [Proc. of SIGMETRICS, 1995, pp. 188-197] is optimal for this problem. In the online case, we give an algorithm that is within a factor of 2 of the best deterministic online algorithm, for any access graph and any k, d ≥ 1. Better ratios are obtained for several classes of access graphs which arise in applications, including complete graphs and directed acyclic graphs (DAGs).
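
To make the cost model concrete, the following Python sketch simulates it under simple assumptions; it is written for this summary, not taken from the paper. A cache hit costs one time unit, a fetch completes d time units after it is issued, and at most min{k, d} fetches may be in flight at once. The function execution_time, its lookahead parameter, and the LRU eviction policy are illustrative choices only: the paper's offline and online algorithms are more sophisticated, and eviction is simplified here to happen when a fetched block is first referenced rather than when its fetch is issued.

  from collections import OrderedDict

  def execution_time(refs, k, d, lookahead=0):
      """Total time to serve refs with cache size k and fetch latency d."""
      cache = OrderedDict()   # block -> None, ordered by recency of use
      in_flight = {}          # block -> time at which its fetch completes
      t = 0
      for i, block in enumerate(refs):
          if block not in cache:
              # Either wait for an in-flight prefetch to complete, or issue
              # a demand fetch now and stall for the full d time units.
              t = max(t, in_flight.pop(block, t + d))
              if len(cache) >= k:
                  cache.popitem(last=False)  # evict the least recently used
              cache[block] = None
          cache.move_to_end(block)
          t += 1  # the access itself costs one time unit
          # Naive pipelined prefetching: issue fetches for upcoming
          # references that are neither cached nor in flight, keeping at
          # most min(k, d) fetches in progress.
          for nxt in refs[i + 1 : i + 1 + lookahead]:
              if len(in_flight) >= min(k, d):
                  break
              if nxt not in cache and nxt not in in_flight:
                  in_flight[nxt] = t + d
      return t

  # With a working set larger than the cache, pipelining hides most of the
  # latency that pure demand fetching pays on every miss.
  refs = list(range(6)) * 4
  print(execution_time(refs, k=3, d=5))               # demand fetching only
  print(execution_time(refs, k=3, d=5, lookahead=5))  # pipelined prefetching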