The high latencies of accesses to background memory such as hard disks or flash memory can be reduced by caching and hidden by prefetching. We consider the problem of scheduling the resulting I/Os when the available fast cache memory is limited and requests carry real-time constraints: for each requested data block we are given a time interval during which that block must reside in main memory. We give a near-linear-time algorithm for this problem that produces a feasible schedule whenever one exists. A second algorithm additionally minimizes the number of I/Os and still runs in polynomial time. For the online variant of the problem, we give a competitive algorithm that uses lookahead and augmented disk speed. We show a tight relationship between the amount of lookahead and the speed augmentation required to obtain a competitive algorithm.
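To make the problem setting concrete, the following is a minimal sketch of an interval-constrained prefetch scheduler. It is a simple heuristic, not the paper's algorithm, and it makes assumptions the abstract does not state: time is discrete, the disk completes one fetch per time step (a block fetched at step t is resident from step t+1), and the cache holds `cache_size` blocks. Fetches are chosen earliest-deadline-first among missing blocks, and evictions follow a Belady-style farthest-next-use rule restricted to blocks not currently inside one of their intervals.

```python
from collections import defaultdict

def edf_prefetch(requests, cache_size, horizon):
    """requests: list of (block, start, end); the block must be in cache
    at every step t with start <= t <= end.  One fetch completes per
    step; a block fetched at step t is resident from step t+1.
    Returns {step: block_fetched} if all intervals are satisfied by
    this heuristic, else None (which does NOT prove infeasibility)."""
    intervals = defaultdict(list)
    for b, s, e in requests:
        intervals[b].append((s, e))

    def next_need(b, t):
        # Start of b's earliest interval still active at or after t.
        starts = [s for s, e in intervals[b] if e >= t]
        return min(starts) if starts else float('inf')

    cache, schedule = set(), {}
    for t in range(horizon):
        # Real-time constraint: every active interval must be resident.
        for b, s, e in requests:
            if s <= t <= e and b not in cache:
                return None
        # Fetch the missing block with the earliest upcoming need (EDF).
        missing = [b for b in intervals
                   if b not in cache and next_need(b, t + 1) < float('inf')]
        if missing:
            b = min(missing, key=lambda x: next_need(x, t + 1))
            if len(cache) >= cache_size:
                # Evict the block needed farthest in the future, but
                # never one that is inside an interval at step t+1.
                evictable = [c for c in cache
                             if not any(s <= t + 1 <= e
                                        for s, e in intervals[c])]
                if not evictable:
                    continue  # cache is pinned; disk idles this step
                cache.discard(max(evictable,
                                  key=lambda x: next_need(x, t + 1)))
            cache.add(b)
            schedule[t] = b
    return schedule
```

For example, two overlapping intervals fit in a cache of size 2 with one fetch per step, while a block demanded at step 0 is infeasible because no fetch can complete before then. The heuristic is one-sided: a returned schedule is always feasible, but `None` may just mean the greedy choices failed.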