Amortized efficiency of list update and paging rules. Communications of the ACM.
A bridging model for parallel computation. Communications of the ACM.
Optimal Partitioning of Cache Memory. IEEE Transactions on Computers.
Competitive paging with locality of reference. Selected papers of the 23rd annual ACM symposium on Theory of computing.
Randomized and multipointer paging with locality of reference. STOC '95: Proceedings of the twenty-seventh annual ACM symposium on Theory of computing.
An analysis of dag-consistent distributed shared-memory algorithms. Proceedings of the eighth annual ACM symposium on Parallel algorithms and architectures.
Off-line algorithms for the list update problem. Information Processing Letters.
Online computation and competitive analysis.
Application-Controlled Paging for a Shared Cache. SIAM Journal on Computing.
Computers and Intractability: A Guide to the Theory of NP-Completeness.
Offline List Update is NP-Hard. ESA '00: Proceedings of the 8th Annual European Symposium on Algorithms.
On the Competitiveness of Linear Search. ESA '00: Proceedings of the 8th Annual European Symposium on Algorithms.
Randomized online multi-threaded paging. Nordic Journal of Computing.
Effectively sharing a cache among threads. Proceedings of the sixteenth annual ACM symposium on Parallelism in algorithms and architectures.
Adaptive insertion policies for high performance caching. Proceedings of the 34th annual international symposium on Computer architecture.
Application-controlled file caching policies. USTC '94: Proceedings of the USENIX Summer 1994 Technical Conference.
Cooperative cache partitioning for chip multiprocessors. Proceedings of the 21st annual international conference on Supercomputing.
Provably good multicore cache performance for divide-and-conquer algorithms. Proceedings of the nineteenth annual ACM-SIAM symposium on Discrete algorithms.
Optimal speedup on a low-degree multi-core parallel architecture (LoPRAM). Proceedings of the twentieth annual symposium on Parallelism in algorithms and architectures.
Fundamental parallel algorithms for private-cache chip multiprocessors. Proceedings of the twentieth annual symposium on Parallelism in algorithms and architectures.
Cache-efficient dynamic programming algorithms for multicores. Proceedings of the twentieth annual symposium on Parallelism in algorithms and architectures.
Adaptive insertion policies for managing shared caches. Proceedings of the 17th international conference on Parallel architectures and compilation techniques.
PIPP: promotion/insertion pseudo-partitioning of multi-core shared caches. Proceedings of the 36th annual international symposium on Computer architecture.
A study of replacement algorithms for a virtual-storage computer. IBM Systems Journal.
SHARP control: controlled shared cache management in chip multiprocessors. Proceedings of the 42nd Annual IEEE/ACM International Symposium on Microarchitecture.
Double digest revisited: complexity and approximability in the presence of noisy data. COCOON '03: Proceedings of the 9th annual international conference on Computing and combinatorics.
Low depth cache-oblivious algorithms. Proceedings of the twenty-second annual ACM symposium on Parallelism in algorithms and architectures.
Geometric algorithms for private-cache chip multiprocessors. ESA '10: Proceedings of the 18th annual European conference on Algorithms: Part II.
Brief announcement: paging for multicore processors. Proceedings of the twenty-third annual ACM symposium on Parallelism in algorithms and architectures.
Online and offline access to short lists. MFCS '07: Proceedings of the 32nd international conference on Mathematical Foundations of Computer Science.
Joint cache partition and job assignment on multi-core processors. WADS '13: Proceedings of the 13th international conference on Algorithms and Data Structures.
Paging for multi-core processors extends the classical paging problem to a setting in which several processes simultaneously share the cache. Recently, Hassidim proposed a model for multi-core paging [25], studying cache eviction policies for multi-cores under the traditional competitive analysis metric and showing that LRU is not competitive against an offline policy that has the power to arbitrarily delay request sequences to its advantage. While Hassidim brought attention to this problem, an effective and realistic model with accompanying competitive caching algorithms remains to be introduced. In this paper we propose a more conventional model in which requests must be served as they arrive. We study the problem of minimizing the number of faults and derive bounds on the competitive ratios of natural strategies for managing the cache. We show that traditional online paging algorithms are not competitive in our model. We then study the offline paging problem and show that deciding whether the request sequences can be served so that, at a given time, each sequence has incurred at most a given number of faults is NP-complete, and that its optimization version is APX-hard (for an unbounded number of sequences). We also show that although offline algorithms can benefit from properly aligning future requests by means of faults, an algorithm that does so by forcing faults on pages already in its cache has no advantage over an honest algorithm that evicts pages only when faults occur. Lastly, we describe offline algorithms for the decision problem and for minimizing the total number of faults that run in polynomial time in the length of the sequences.
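To make the shared-cache setting concrete, the following is a minimal sketch (not from the paper) of the kind of simulation the model describes: several request sequences are served as they arrive, here in a simple round-robin interleaving, against a single LRU-managed cache, with faults counted per sequence. The function name, the round-robin interleaving, and the assumption that each process requests its own private pages are illustrative choices, not details fixed by the paper.

```python
from collections import OrderedDict

def shared_lru_faults(sequences, cache_size):
    """Serve several request sequences round-robin from one shared
    LRU cache; return the number of faults each sequence incurs."""
    cache = OrderedDict()              # page -> None, kept in LRU order
    faults = [0] * len(sequences)
    positions = [0] * len(sequences)   # next request index per sequence
    remaining = sum(len(s) for s in sequences)
    while remaining:
        for i, seq in enumerate(sequences):
            if positions[i] >= len(seq):
                continue
            # Tag pages by sequence: processes request disjoint pages.
            page = (i, seq[positions[i]])
            positions[i] += 1
            remaining -= 1
            if page in cache:
                cache.move_to_end(page)        # hit: refresh recency
            else:
                faults[i] += 1                 # fault: bring page in
                if len(cache) >= cache_size:
                    cache.popitem(last=False)  # evict least recently used
                cache[page] = None
    return faults

# With a cache large enough for both working sets, only the initial
# (compulsory) faults occur; when the cache is too small, the two
# interleaved sequences evict each other's pages and every request faults.
print(shared_lru_faults([[1, 2, 1, 2], [3, 4, 3, 4]], 4))  # [2, 2]
print(shared_lru_faults([[1, 2, 1, 2], [3, 4, 3, 4]], 2))  # [4, 4]
```

The second call illustrates the phenomenon behind the negative results above: under an unfavorable interleaving, LRU-style sharing lets the sequences thrash each other, which is why traditional online algorithms fail to be competitive in this model.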