Amortized efficiency of list update and paging rules
Communications of the ACM
The input/output complexity of sorting and related problems
Communications of the ACM
Journal of Algorithms
Competitive paging and dual-guided on-line weighted caching and matching algorithms
A study of integrated prefetching and caching strategies
Proceedings of the 1995 ACM SIGMETRICS joint international conference on Measurement and modeling of computer systems
Integrated parallel prefetching and caching
Proceedings of the 1996 ACM SIGMETRICS international conference on Measurement and modeling of computer systems
Simple randomized mergesort on parallel disks
Parallel Computing - Special double issue: parallel I/O
Competitive parallel disk prefetching and buffer management
Proceedings of the fifth workshop on I/O in parallel and distributed systems
Minimizing stall time in single and parallel disk systems
STOC '98 Proceedings of the thirtieth annual ACM symposium on Theory of computing
Online computation and competitive analysis
On competitive on-line paging with lookahead
Theoretical Computer Science
Optimal read-once parallel disk scheduling
Proceedings of the sixth workshop on I/O in parallel and distributed systems
Tight Bounds for Prefetching and Buffer Management Algorithms for Parallel I/O Systems
IEEE Transactions on Parallel and Distributed Systems
Distribution sort with randomized cycling
SODA '01 Proceedings of the twelfth annual ACM-SIAM symposium on Discrete algorithms
Optimal prefetching and caching for parallel I/O systems
Proceedings of the thirteenth annual ACM symposium on Parallel algorithms and architectures
External memory algorithms and data structures: dealing with massive data
ACM Computing Surveys (CSUR)
Red-Black Prefetching: An Approximation Algorithm for Parallel Disk Scheduling
Proceedings of the 18th Conference on Foundations of Software Technology and Theoretical Computer Science
Integrated prefetching and caching in single and parallel disk systems
Proceedings of the fifteenth annual ACM symposium on Parallel algorithms and architectures
Near-optimal parallel prefetching and caching
FOCS '96 Proceedings of the 37th Annual Symposium on Foundations of Computer Science
Online algorithms for prefetching and caching on parallel disks
Proceedings of the sixteenth annual ACM symposium on Parallelism in algorithms and architectures
Algorithms and data structures for external memory
Foundations and Trends® in Theoretical Computer Science
We consider the natural extension of the well-known single-disk caching problem to the parallel disk I/O model (PDM) [17]. The main challenge is to achieve as much parallelism as possible while avoiding I/O bottlenecks. We are given a fast memory (cache) of size M memory blocks along with a request sequence Σ = (b1, b2, ..., bn), where each block bi resides on one of D disks. In each parallel I/O step, at most one block can be fetched from each disk. The task is to serve Σ in the minimum number of parallel I/Os. Each I/O is thus analogous to a page fault, except that during each fault up to D blocks can be brought into memory, provided all of the new blocks reside on distinct disks. The problem has a long history [18, 12, 13, 26]. Note that it is non-trivial even if all requests in Σ are distinct; this restricted version is called read-once. Despite progress on the offline version [13, 15] and the read-once version [12], the general online problem has remained open. Here, we provide comprehensive results with a fully general solution to the problem, with asymptotically tight competitive ratios.

To exploit parallelism, any parallel disk algorithm needs a certain amount of lookahead into future requests, and to provide effective caching, an online algorithm must achieve a competitive ratio of o(D). We show a lower bound stating that for lookahead L ≤ M, any online algorithm must be Ω(D)-competitive. For lookahead L greater than M(1 + 1/ε), where ε is a constant, our algorithm SKEW achieves the tight upper bound of O(√(MD/L)) on the competitive ratio. The previous algorithm tLRU [26] was O((MD/L)^(2/3))-competitive, and this was also shown to be tight [26] for an LRU-based strategy; we achieve the tight ratio using a strategy quite different from LRU. We also show tight results for randomized algorithms against an oblivious adversary, and give an algorithm achieving better bounds in the resource augmentation model.
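To make the PDM setting concrete, here is a minimal, illustrative simulator in Python. The demand-fetch policy with FIFO eviction and greedy per-disk prefetching is a hypothetical baseline for exposition only; it is not the SKEW or tLRU algorithm discussed above, and the function name `serve_pdm` is our own.

```python
from collections import deque

# Illustrative simulator for the parallel disk model (PDM) described above:
# a cache of M blocks, D disks, and parallel I/O steps that fetch at most
# one block per disk. The demand-fetch policy with FIFO eviction used here
# is a hypothetical baseline, NOT the paper's SKEW algorithm.
def serve_pdm(requests, disk_of, M, D):
    """Return the number of parallel I/O steps used to serve `requests`."""
    cache, fifo, ios = set(), deque(), 0
    i = 0
    while i < len(requests):
        demanded = requests[i]
        if demanded in cache:
            i += 1
            continue
        ios += 1  # page fault: one parallel I/O step
        if len(cache) >= M:
            cache.discard(fifo.popleft())  # FIFO eviction
        cache.add(demanded)
        fifo.append(demanded)
        used_disks = {disk_of[demanded]}
        # Use the remaining disks in this same step to prefetch upcoming
        # missing blocks into free cache slots, one block per distinct disk.
        j = i + 1
        while j < len(requests) and len(used_disks) < D and len(cache) < M:
            b = requests[j]
            if b not in cache and disk_of[b] not in used_disks:
                used_disks.add(disk_of[b])
                cache.add(b)
                fifo.append(b)
            j += 1
    return ios
```

With D = 1 this degenerates to ordinary demand paging; with larger D, blocks residing on distinct disks are fetched in the same step, which is exactly the parallelism the competitive ratios above quantify.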