Amortized efficiency of list update and paging rules. Communications of the ACM.
Journal of Algorithms.
RAID: high-performance, reliable secondary storage. ACM Computing Surveys (CSUR).
A study of integrated prefetching and caching strategies. Proceedings of the 1995 ACM SIGMETRICS joint international conference on Measurement and modeling of computer systems.
Informed prefetching and caching. SOSP '95: Proceedings of the fifteenth ACM symposium on Operating systems principles.
Simple randomized mergesort on parallel disks. Parallel Computing, special double issue: parallel I/O.
Minimizing stall time in single and parallel disk systems. STOC '98: Proceedings of the thirtieth annual ACM symposium on Theory of computing.
Tight Bounds for Prefetching and Buffer Management Algorithms for Parallel I/O Systems. IEEE Transactions on Parallel and Distributed Systems.
Fast concurrent access to parallel disks. SODA '00: Proceedings of the eleventh annual ACM-SIAM symposium on Discrete algorithms.
Near-Optimal Parallel Prefetching and Caching. SIAM Journal on Computing.
Competitive parallel disk prefetching and buffer management. Journal of Algorithms.
Random duplicate storage strategies for load balancing in multimedia servers. Information Processing Letters.
On Competitive On-Line Paging with Lookahead. STACS '96: Proceedings of the 13th Annual Symposium on Theoretical Aspects of Computer Science.
Buffer management for a D-disk parallel I/O system is considered in the context of randomized placement of data on the disks. A simple prefetching and caching algorithm, PHASE-LRU, which uses bounded lookahead, is described and analyzed. It is shown that PHASE-LRU performs an expected number of I/Os that is within a factor Θ(log D / log log D) of the number performed by an optimal off-line algorithm. In contrast, any deterministic buffer management algorithm with the same amount of lookahead must perform at least Ω(√D) times as many I/Os as the optimal.
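The abstract compares an online policy's I/O count against an offline optimum. As a single-disk illustration only (not the paper's PHASE-LRU, whose details are not given here), the sketch below counts I/Os for an LRU cache that processes the request stream in phases of bounded lookahead, against Belady's offline MIN rule as the optimal baseline; all names and the sample trace are hypothetical.

```python
from collections import OrderedDict

def phase_lru_ios(requests, cache_size, lookahead):
    """Count block fetches (I/Os) for an LRU cache that consumes the
    request stream in phases of `lookahead` requests at a time.
    Illustrative single-disk sketch, not the paper's PHASE-LRU."""
    cache = OrderedDict()                       # block -> None; order tracks recency
    ios = 0
    for start in range(0, len(requests), lookahead):
        for block in requests[start:start + lookahead]:
            if block in cache:
                cache.move_to_end(block)        # refresh recency on a hit
            else:
                ios += 1                        # miss: fetch from disk
                if len(cache) >= cache_size:
                    cache.popitem(last=False)   # evict least recently used
                cache[block] = None
    return ios

def offline_min_ios(requests, cache_size):
    """Belady's offline MIN: on a miss with a full cache, evict the
    block whose next use lies farthest in the future. Serves as the
    offline-optimal baseline the abstract compares against."""
    cache, ios = set(), 0
    for i, block in enumerate(requests):
        if block in cache:
            continue
        ios += 1
        if len(cache) >= cache_size:
            def next_use(b):
                try:
                    return requests.index(b, i + 1)
                except ValueError:
                    return float('inf')         # never requested again
            cache.remove(max(cache, key=next_use))
        cache.add(block)
    return ios
```

On the sample trace [1, 2, 3, 1, 2, 4, 1, 2, 3, 4] with a 3-block cache, the online policy incurs 6 I/Os versus 5 for the offline optimum, illustrating the kind of online-versus-offline gap the stated bounds quantify.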