Optimal prepaging and font caching
ACM Transactions on Programming Languages and Systems (TOPLAS)
Amortized efficiency of list update and paging rules
Communications of the ACM
Theory of linear and integer programming
Journal of Algorithms
Optimal prefetching via data compression (extended abstract)
SFCS '91 Proceedings of the 32nd annual symposium on Foundations of computer science
Practical prefetching via data compression
SIGMOD '93 Proceedings of the 1993 ACM SIGMOD international conference on Management of data
Practical prefetching techniques for multiprocessor file systems
Distributed and Parallel Databases - Selected papers from the first international conference on parallel and distributed information systems
Competitive paging with locality of reference
Selected papers of the 23rd annual ACM symposium on Theory of computing
A study of integrated prefetching and caching strategies
Proceedings of the 1995 ACM SIGMETRICS joint international conference on Measurement and modeling of computer systems
Informed prefetching and caching
SOSP '95 Proceedings of the fifteenth ACM symposium on Operating systems principles
Randomized and multipointer paging with locality of reference
STOC '95 Proceedings of the twenty-seventh annual ACM symposium on Theory of computing
ACM Transactions on Computer Systems (TOCS)
A trace-driven comparison of algorithms for parallel prefetching and caching
OSDI '96 Proceedings of the second USENIX symposium on Operating systems design and implementation
Parallel prefetching and caching
Experimental studies of access graph based heuristics: beating the LRU standard?
SODA '97 Proceedings of the eighth annual ACM-SIAM symposium on Discrete algorithms
Optimal prediction for prefetching in the worst case
SODA '94 Proceedings of the fifth annual ACM-SIAM symposium on Discrete algorithms
Fido: A Cache That Learns to Fetch
VLDB '91 Proceedings of the 17th International Conference on Very Large Data Bases
Near-optimal parallel prefetching and caching
FOCS '96 Proceedings of the 37th Annual Symposium on Foundations of Computer Science
PC-OPT: Optimal Offline Prefetching and Caching for Parallel I/O Systems
IEEE Transactions on Computers
An Experimental Study of Prefetching and Caching Algorithms for the World Wide Web
ALENEX '02 Revised Papers from the 4th International Workshop on Algorithm Engineering and Experiments
Integrated prefetching and caching in single and parallel disk systems
Proceedings of the fifteenth annual ACM symposium on Parallel algorithms and architectures
Strongly competitive algorithms for caching with pipelined prefetching
Information Processing Letters
Integrated prefetching and caching in single and parallel disk systems
Information and Computation
Lexicographic QoS scheduling for parallel I/O
Proceedings of the seventeenth annual ACM symposium on Parallelism in algorithms and architectures
Scheduling with QoS in parallel I/O systems
SNAPI '04 Proceedings of the international workshop on Storage network architecture and parallel I/Os
Finding total unimodularity in optimization problems solved by linear programs
ESA'06 Proceedings of the 14th conference on Annual European Symposium - Volume 14
Adaptive prefetching algorithm in disk controllers
Performance Evaluation
Scheduling multiple flows on parallel disks
HiPC'05 Proceedings of the 12th international conference on High Performance Computing
Real-time integrated prefetching and caching
Journal of Scheduling
We study integrated prefetching and caching problems, following the work of Cao et al. [1995] and Kimbrel and Karlin [1996], who gave approximation algorithms for minimizing the total elapsed time in single- and parallel-disk settings. The total elapsed time is the length of the request sequence to be served plus the processor stall time incurred while waiting for blocks to be fetched from disk. We show that an optimal prefetching/caching schedule for the single-disk problem can be computed in polynomial time, settling an open question of Kimbrel and Karlin. For the parallel-disk problem, we give an approximation algorithm for minimizing stall time; the solution uses a few extra memory blocks in cache. Stall time is an important measure for this problem and is harder to approximate than elapsed time. All of our algorithms are based on a new approach that formulates the prefetching/caching problems as linear programs.
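The cost model in the abstract can be illustrated with a toy simulator (an illustrative sketch under simplifying assumptions, not the paper's algorithm): serving a cached request takes one time unit, a fetch occupies the single disk for F time units and replaces an evicted block when it completes, and the processor stalls whenever it must wait for an in-flight block. The function name and the schedule format (a list of (start-request-index, evicted block, fetched block) triples) are hypothetical.

```python
def elapsed_time(requests, cache, fetches, F):
    """Simulate a single-disk prefetching/caching schedule (toy model).

    requests: sequence of blocks; serving a cached request costs 1 time unit.
    cache:    initial cache contents.
    fetches:  list of (start_index, evict, fetch) triples, ordered by start_index;
              assumed feasible (an evicted block is not requested before its
              replacement arrives).
    F:        duration of one disk fetch, in time units.
    Returns (total elapsed time, total stall time); elapsed = len(requests) + stall.
    """
    cache = set(cache)
    time = 0          # elapsed time so far
    stall = 0         # accumulated processor stall time
    pending = None    # block currently being fetched, if any
    fetch_done = 0    # completion time of the in-flight fetch
    fi = 0
    for i, r in enumerate(requests):
        # An in-flight fetch that has already finished lands in cache.
        if pending is not None and time >= fetch_done:
            cache.add(pending)
            pending = None
        # Start any fetch scheduled just before request i.
        while fi < len(fetches) and fetches[fi][0] == i:
            if pending is not None:  # single disk: wait for the current fetch
                wait = max(0, fetch_done - time)
                time += wait
                stall += wait
                cache.add(pending)
            _, evict, fetch = fetches[fi]
            cache.discard(evict)     # eviction happens when the fetch starts
            pending = fetch
            fetch_done = time + F
            fi += 1
        if r not in cache:
            # The requested block must be the one in flight: stall until it arrives.
            wait = max(0, fetch_done - time)
            time += wait
            stall += wait
            cache.add(pending)
            pending = None
        time += 1  # serve the (now cached) request
    return time, stall
```

For example, with requests a, b, c, a cache holding {a, b}, and F = 2, prefetching c while b is served gives elapsed time 4 with 1 unit of stall, whereas fetching c only on demand gives elapsed time 5 with 2 units of stall; this is the gap the prefetching/caching schedules in the paper aim to minimize.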