Competitive paging with locality of reference. Selected papers of the 23rd annual ACM symposium on Theory of computing.
Randomized algorithms.
A study of integrated prefetching and caching strategies. Proceedings of the 1995 ACM SIGMETRICS joint international conference on Measurement and modeling of computer systems.
Randomized and multipointer paging with locality of reference. STOC '95: Proceedings of the twenty-seventh annual ACM symposium on Theory of computing.
Strongly Competitive Algorithms for Paging with Locality of Reference. SIAM Journal on Computing.
Approximation algorithms for NP-hard problems.
Informed multi-process prefetching and caching. SIGMETRICS '97: Proceedings of the 1997 ACM SIGMETRICS international conference on Measurement and modeling of computer systems.
Implementing cooperative prefetching and caching in a globally-managed memory system. SIGMETRICS '98/PERFORMANCE '98: Proceedings of the 1998 ACM SIGMETRICS joint international conference on Measurement and modeling of computer systems.
Online computation and competitive analysis.
Experimental studies of access graph based heuristics: beating the LRU standard? SODA '97: Proceedings of the eighth annual ACM-SIAM symposium on Discrete algorithms.
Near-Optimal Parallel Prefetching and Caching. SIAM Journal on Computing.
Minimizing stall time in single and parallel disk systems. Journal of the ACM (JACM).
Developments from a June 1996 seminar on Online algorithms: the state of the art.
Integrated prefetching and caching in single and parallel disk systems. Proceedings of the fifteenth annual ACM symposium on Parallel algorithms and architectures.
Truly online paging with locality of reference. FOCS '97: Proceedings of the 38th Annual Symposium on Foundations of Computer Science.
Suppose that a program makes a sequence of m accesses (references) to data blocks, and that the cache can hold k blocks. An access to a block already in the cache takes one time unit, and fetching a missing block takes d time units. A fetch of a new block can be initiated while a previous fetch is in progress; thus, up to min{k, d} block fetches can be in progress simultaneously. Any sequence of block references is modeled as a walk on the access graph of the program. The goal is to find a prefetching and caching policy that minimizes the overall execution time of a given reference sequence. This study is motivated by the pipelined operation of modern memory controllers and by program execution on fast processors. In the offline case, we show that an algorithm proposed by Cao et al. [Proc. of SIGMETRICS, 1995, pp. 188-197] is optimal for this problem. In the online case, we give an algorithm whose cost is within a factor of 2 of the best online deterministic algorithm, for any access graph and any k, d ≥ 1. Better ratios are obtained for several classes of access graphs that arise in applications, including complete graphs and directed acyclic graphs (DAGs).
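The cost model in the abstract can be made concrete with a small simulator. The sketch below is my own illustration, not code from the paper: the function name schedule_cost, the (index, block, evicted-block) encoding of a schedule, and the toy example are assumptions made for this sketch. It charges one time unit per served reference, d time units per fetch, treats a fetch as occupying a cache slot from the moment it starts, and caps the number of overlapping fetches at min{k, d}; it does not reproduce the paper's optimal offline or online policies.

```python
def schedule_cost(refs, k, d, fetches):
    """Evaluate one prefetching/caching schedule under the abstract's cost model.

    refs    -- the reference sequence (list of block names)
    k, d    -- cache size and fetch time, k, d >= 1
    fetches -- list of (i, block, evict): just before serving refs[i], start
               fetching `block` into the slot freed by `evict` (evict=None
               while the cache is not yet full)

    Returns the overall execution time, or raises ValueError if the schedule
    is infeasible.  (Illustrative sketch only; names are hypothetical.)
    """
    max_parallel = min(k, d)          # at most min{k, d} fetches may overlap
    by_index = {}
    for i, blk, evict in fetches:
        by_index.setdefault(i, []).append((blk, evict))

    resident = set()                  # blocks usable right now
    pending = {}                      # block -> completion time of its fetch
    time = 0

    for i, b in enumerate(refs):
        # Start the fetches scheduled just before this reference.
        for blk, evict in by_index.get(i, []):
            # If all fetch "pipes" are busy, stall until the earliest one frees.
            while len(pending) >= max_parallel:
                first = min(pending, key=pending.get)
                time = max(time, pending[first])
                resident.add(first)
                del pending[first]
            if evict is not None:
                if evict not in resident:
                    raise ValueError(f"cannot evict {evict}: not resident")
                resident.remove(evict)    # evicted block is unusable once the fetch starts
            if len(resident) + len(pending) >= k:
                raise ValueError("no free cache slot for the fetch")
            pending[blk] = time + d

        # Retire fetches that have completed by now.
        for blk in [x for x, t in pending.items() if t <= time]:
            resident.add(blk)
            del pending[blk]

        if b in pending:                  # stall until the needed fetch finishes
            time = pending.pop(b)
            resident.add(b)
        elif b not in resident:
            raise ValueError(f"reference {i} to block {b!r} misses with no fetch scheduled")
        time += 1                         # serving a cached block costs one time unit
    return time


if __name__ == "__main__":
    # Toy example: blocks a, b, c with cache size k = 2 and fetch time d = 3.
    refs = ["a", "b", "a", "c", "a", "c"]
    demand = [(0, "a", None), (1, "b", None), (3, "c", "b")]   # fetch only on demand
    ahead = [(0, "a", None), (1, "b", None), (2, "c", "b")]    # prefetch c one step early
    print(schedule_cost(refs, 2, 3, demand))  # 15: every miss stalls the full d units
    print(schedule_cost(refs, 2, 3, ahead))   # 14: one stall unit hidden behind a cache hit
```

The two schedules in the example differ only in when the fetch of c starts; issuing it one reference earlier overlaps part of the d-unit fetch with the service of a cached block, which is exactly the kind of saving the prefetching policies in the paper optimize.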