Optimal prepaging and font caching
ACM Transactions on Programming Languages and Systems (TOPLAS)
Amortized efficiency of list update and paging rules
Communications of the ACM
Competitive paging with locality of reference
Selected papers of the 23rd annual ACM symposium on Theory of computing
A study of integrated prefetching and caching strategies
Proceedings of the 1995 ACM SIGMETRICS joint international conference on Measurement and modeling of computer systems
Informed prefetching and caching
SOSP '95 Proceedings of the fifteenth ACM symposium on Operating systems principles
Integrated parallel prefetching and caching
Proceedings of the 1996 ACM SIGMETRICS international conference on Measurement and modeling of computer systems
Optimal prefetching via data compression
Journal of the ACM (JACM)
ACM Transactions on Computer Systems (TOCS)
A trace-driven comparison of algorithms for parallel prefetching and caching
OSDI '96 Proceedings of the second USENIX symposium on Operating systems design and implementation
Optimal Prediction for Prefetching in the Worst Case
SIAM Journal on Computing
Near-Optimal Parallel Prefetching and Caching
SIAM Journal on Computing
Minimizing stall time in single and parallel disk systems
Journal of the ACM (JACM)
Optimal prefetching and caching for parallel I/O systems
Proceedings of the thirteenth annual ACM symposium on Parallel algorithms and architectures
Fido: A Cache That Learns to Fetch
VLDB '91 Proceedings of the 17th International Conference on Very Large Data Bases
Truly online paging with locality of reference
FOCS '97 Proceedings of the 38th Annual Symposium on Foundations of Computer Science
Online algorithms for prefetching and caching on parallel disks
Proceedings of the sixteenth annual ACM symposium on Parallelism in algorithms and architectures
Strongly competitive algorithms for caching with pipelined prefetching
Information Processing Letters
The performance impact of kernel prefetching on buffer cache replacement algorithms
SIGMETRICS '05 Proceedings of the 2005 ACM SIGMETRICS international conference on Measurement and modeling of computer systems
A buffer cache management scheme exploiting both temporal and spatial localities
ACM Transactions on Storage (TOS)
DULO: an effective buffer cache management scheme to exploit both temporal and spatial locality
FAST '05 Proceedings of the 4th USENIX Conference on File and Storage Technologies
The Performance Impact of Kernel Prefetching on Buffer Cache Replacement Algorithms
IEEE Transactions on Computers
DMA-based prefetching for I/O-intensive workloads on the Cell architecture
Proceedings of the 5th conference on Computing frontiers
Tight competitive ratios for parallel disk prefetching and caching
Proceedings of the twentieth annual symposium on Parallelism in algorithms and architectures
/scratch as a cache: rethinking HPC center scratch storage
Proceedings of the 23rd international conference on Supercomputing
A Capabilities-Aware Programming Model for Asymmetric High-End Systems
CCGRID '10 Proceedings of the 2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing
We study integrated prefetching and caching in single and parallel disk systems. Two popular approximation algorithms, Aggressive and Conservative, exist for minimizing the total elapsed time in the single-disk problem. For D parallel disks, approximation algorithms are known for both the elapsed-time and stall-time performance measures; in particular, there exists a D-approximation algorithm for the stall-time measure that uses D-1 additional memory locations in cache.

In the first part of the paper we investigate approximation algorithms for the single-disk problem. We give a refined analysis of the Aggressive algorithm, showing that the original analysis was too pessimistic, and we prove that our new bound is tight. Additionally, we present a new family of prefetching and caching strategies and give algorithms that perform better than Aggressive and Conservative.

In the second part of the paper we investigate the problem of minimizing stall time in parallel disk systems. We present a polynomial-time algorithm for computing a prefetching/caching schedule whose stall time is bounded by that of an optimal solution. The schedule uses at most 3(D-1) extra memory locations in cache. This is the first polynomial-time algorithm for computing schedules with minimum stall time. Our algorithm is based on the linear programming approach of [1]. However, in order to achieve minimum stall times, we introduce the new concept of synchronized schedules, in which fetches on the D disks are performed completely in parallel.
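To make the single-disk setting concrete, the following is a minimal sketch (not the paper's refined algorithms) of the classical Aggressive strategy in the standard discrete model of Cao et al.: each request served from cache costs one time unit, a disk fetch costs F units, only one fetch can be in flight, and a block is evicted the moment a fetch begins. The function name and interface are hypothetical, chosen for illustration.

```python
def next_use(seq, i, block):
    """Position of the next request to `block` at or after index i (inf if none)."""
    for j in range(i, len(seq)):
        if seq[j] == block:
            return j
    return float('inf')

def aggressive(seq, cache, F):
    """Simulate the Aggressive prefetching/caching strategy on a single disk.

    seq: request sequence; cache: initial (non-empty) cache contents;
    F: fetch time in time units. Returns (elapsed_time, stall_time).
    """
    cache = set(cache)
    t = 0           # current time
    stall = 0       # accumulated stall time
    fetch = None    # (completion_time, block) of the in-flight fetch, if any
    for i in range(len(seq)):
        # Whenever the disk is idle, start fetching the next missing block,
        # i.e. the earliest-requested block not currently cached.
        if fetch is None:
            missing = next((b for b in seq[i:] if b not in cache), None)
            if missing is not None:
                # Evict the cached block whose next request is furthest away ...
                evict = max(cache, key=lambda b: next_use(seq, i, b))
                # ... but "do no harm": only if it is needed after the fetched block.
                if next_use(seq, i, evict) > next_use(seq, i, missing):
                    cache.remove(evict)
                    fetch = (t + F, missing)
        # Serve request i: one time unit from cache, stalling if a fetch
        # for this block is still in flight.
        if seq[i] in cache:
            t += 1
        else:
            done, block = fetch        # the in-flight block is seq[i]
            if done > t:
                stall += done - t
                t = done
            cache.add(block)
            fetch = None
            t += 1
        # Mark a fetch whose completion time has passed as done.
        if fetch is not None and fetch[0] <= t:
            cache.add(fetch[1])
            fetch = None
    return t, stall
```

For example, with cache {a, b}, F = 2, and request sequence a, b, c, the strategy overlaps the fetch of c with serving b and stalls for only one unit instead of two, finishing at time 4.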