Optimal prepaging and font caching. ACM Transactions on Programming Languages and Systems (TOPLAS).
Amortized efficiency of list update and paging rules. Communications of the ACM.
Competitive paging with locality of reference. Selected papers of the 23rd annual ACM symposium on Theory of computing.
A study of integrated prefetching and caching strategies. Proceedings of the 1995 ACM SIGMETRICS joint international conference on Measurement and modeling of computer systems.
Informed prefetching and caching. SOSP '95 Proceedings of the fifteenth ACM symposium on Operating systems principles.
Integrated parallel prefetching and caching. Proceedings of the 1996 ACM SIGMETRICS international conference on Measurement and modeling of computer systems.
Optimal prefetching via data compression. Journal of the ACM (JACM); ACM Transactions on Computer Systems (TOCS).
A trace-driven comparison of algorithms for parallel prefetching and caching. OSDI '96 Proceedings of the second USENIX symposium on Operating systems design and implementation.
Optimal Prediction for Prefetching in the Worst Case. SIAM Journal on Computing.
Near-Optimal Parallel Prefetching and Caching. SIAM Journal on Computing.
Minimizing stall time in single and parallel disk systems. Journal of the ACM (JACM).
Optimal prefetching and caching for parallel I/O systems. Proceedings of the thirteenth annual ACM symposium on Parallel algorithms and architectures.
Fido: A Cache That Learns to Fetch. VLDB '91 Proceedings of the 17th International Conference on Very Large Data Bases.
Truly online paging with locality of reference. FOCS '97 Proceedings of the 38th Annual Symposium on Foundations of Computer Science.
Algorithms and data structures for external memory. Foundations and Trends® in Theoretical Computer Science.
We study integrated prefetching and caching in single and parallel disk systems. In the first part of the paper, we investigate approximation algorithms for the single-disk problem. Two popular approximation algorithms, Aggressive and Conservative, minimize the total elapsed time. We give a refined analysis of the Aggressive algorithm that improves the original analysis by Cao et al., and we prove that our new bound is tight. Additionally, we present a new family of prefetching and caching strategies and give algorithms that perform better than Aggressive and Conservative. In the second part of the paper, we investigate the problem of minimizing stall time in parallel disk systems. We present a polynomial-time algorithm for computing a prefetching/caching schedule whose stall time is bounded by that of an optimal solution; the schedule uses at most 2(D - 1) extra memory locations in cache. This is the first polynomial-time algorithm that, using a small amount of extra resources, computes schedules whose stall times are bounded by those of optimal schedules not using extra resources. Our algorithm is based on the linear programming approach of [Journal of the ACM 47 (2000) 969]. However, to achieve minimum stall times, we introduce the new concept of synchronized schedules, in which fetches on the D disks are performed completely in parallel.
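To make the single-disk model concrete, here is a minimal, hedged Python sketch of the Aggressive rule as usually described in this literature: whenever the disk is idle, start fetching the next missing block in the request sequence, evicting the cached block whose next reference is furthest in the future, but only if the fetched block is referenced before the evicted one. The function name, the unit-cost timing model, and the assumption that the cache is preloaded with the first k distinct blocks are simplifications for illustration, not the paper's exact formulation.

```python
def next_use(requests, i, block):
    """Index of the next request for `block` at or after position i (inf if none)."""
    for j in range(i, len(requests)):
        if requests[j] == block:
            return j
    return float('inf')

def aggressive_elapsed_time(requests, k, fetch_time):
    """Elapsed time of the Aggressive single-disk prefetching/caching rule.

    Model: each served request costs one time unit, a fetch occupies the
    single disk for `fetch_time` units, and at most one fetch is in flight.
    Simplifying assumption: the cache starts preloaded with the first k
    distinct blocks of the sequence.
    """
    cache = set()
    for b in requests:                       # preload first k distinct blocks
        cache.add(b)
        if len(cache) == k:
            break

    time, i, pending = 0, 0, None            # pending = (finish_time, block)
    while i < len(requests):
        if pending and pending[0] <= time:   # retire a completed fetch
            cache.add(pending[1])
            pending = None
        if pending is None:
            # Aggressive: start the next fetch as early as possible.
            missing = next((b for b in requests[i:] if b not in cache), None)
            if missing is not None and cache:
                victim = max(cache, key=lambda b: next_use(requests, i, b))
                # Evict only if the fetched block is referenced before the
                # victim's next reference.
                if next_use(requests, i, missing) < next_use(requests, i, victim):
                    cache.remove(victim)
                    pending = (time + fetch_time, missing)
        if requests[i] in cache:
            time += 1                        # cache hit: serve in one unit
            i += 1
        else:
            time = pending[0]                # stall until the fetch completes
            cache.add(pending[1])
            pending = None
    return time
```

For example, on the sequence a, b, a, c with k = 2 and fetch time 2, the fetch of c can only begin once b's last reference has been served, so the processor stalls for one unit and the schedule takes 5 time units in total. The Conservative algorithm would instead restrict itself to the fetch pattern of an optimal offline caching strategy, trading fewer fetches for potentially longer stalls.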