Paging for multi-core shared caches
Proceedings of the 3rd Innovations in Theoretical Computer Science Conference
Paging for multicore processors extends the classical paging problem to a setting in which several processes simultaneously share the cache. Recently, Hassidim [6] studied cache eviction policies for multicores under the traditional competitive analysis metric, showing that LRU is not competitive against an offline policy that may arbitrarily delay request sequences to its advantage. In this paper we study caching under the more conservative model in which requests must be served as they arrive. We derive bounds on the competitive ratios of natural strategies for managing the cache, and we show that the offline problem is NP-complete but admits an algorithm that runs in time polynomial in the length of the request sequences.
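A minimal sketch of the online model the abstract describes: a single LRU cache serves the interleaved request streams of several threads in arrival order, with no power to delay requests. All names and the interface below are illustrative assumptions, not taken from the paper; the example only demonstrates how threads sharing one cache can evict each other's pages.

```python
from collections import OrderedDict

def shared_lru_faults(interleaved, cache_size):
    """Serve an interleaved multicore request stream with one shared LRU
    cache holding `cache_size` pages. Requests are served as they arrive
    (the online model); returns the fault count per thread.

    `interleaved` is a list of (thread_id, page) pairs; pages are treated
    as private to each thread, so the cache keys on the pair."""
    cache = OrderedDict()              # key -> None, kept in LRU order
    faults = {}
    for thread_id, page in interleaved:
        faults.setdefault(thread_id, 0)
        key = (thread_id, page)
        if key in cache:
            cache.move_to_end(key)     # hit: refresh recency
        else:
            faults[thread_id] += 1     # miss: evict LRU page if full
            if len(cache) >= cache_size:
                cache.popitem(last=False)
            cache[key] = None
    return faults
```

For instance, with a shared cache of size 2, the stream `[(0, 'a'), (1, 'x'), (0, 'b'), (1, 'x'), (0, 'a')]` makes thread 0 fault on every one of its requests (its two pages keep evicting each other) while thread 1 faults only once, illustrating the interference effects that make the shared-cache setting harder than classical paging.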