Amortized efficiency of list update and paging rules. Communications of the ACM.
Journal of Algorithms.
Competitive paging with locality of reference. Selected papers of the 23rd annual ACM symposium on Theory of computing.
Experimental studies of access graph based heuristics: beating the LRU standard? SODA '97: Proceedings of the eighth annual ACM-SIAM symposium on Discrete algorithms.
Competitive analysis of randomized paging algorithms. Theoretical Computer Science.
SIAM Journal on Computing.
Flexible reference trace reduction for VM simulations. ACM Transactions on Modeling and Computer Simulation (TOMACS).
The relative worst-order ratio applied to paging. Journal of Computer and System Sciences.
Modern Operating Systems.
A study of replacement algorithms for a virtual-storage computer. IBM Systems Journal.
Algorithmica - Special issue: Algorithms, Combinatorics, & Geometry.
Outperforming LRU via competitive analysis on parametrized inputs for paging. Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms.
ONLINEMIN: a fast strongly competitive randomized paging algorithm. WAOA'11: Proceedings of the 9th international conference on Approximation and Online Algorithms.
Paging is a well-studied problem in the field of online algorithms. LRU is a simple paging algorithm that incurs few cache misses and supports efficient implementations. Algorithms that outperform LRU in terms of cache misses exist, but they are generally more complex and thus not automatically better, since their increased runtime may cancel out the gains in cache misses. In this paper we focus on efficient implementations for the OnOPT class described in [13], particularly on an algorithm in this class, denoted RDM, that was shown to typically incur fewer misses than LRU. We provide experimental evidence on a wide range of cache traces showing that our implementation of RDM is competitive with LRU with respect to runtime. In a scenario with realistic time penalties for cache misses, we show that our implementation consistently outperforms LRU, even if the runtime of LRU is set to zero.
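As a point of reference for the baseline discussed above, the following minimal sketch illustrates why LRU is considered simple to implement efficiently; the trace, cache size, and function name are illustrative assumptions, not the paper's actual implementation or benchmark setup:

```python
from collections import OrderedDict

def lru_misses(trace, k):
    """Run LRU with cache size k over a page-request trace; return the miss count.
    Illustrative sketch only -- not the RDM/OnOPT implementation from the paper."""
    cache = OrderedDict()  # keys kept in recency order, most recently used last
    misses = 0
    for page in trace:
        if page in cache:
            cache.move_to_end(page)        # hit: mark as most recently used
        else:
            misses += 1                    # miss: page must be brought in
            if len(cache) == k:
                cache.popitem(last=False)  # evict the least recently used page
            cache[page] = None
    return misses

# Hypothetical trace with cache size k=2
print(lru_misses([1, 2, 1, 3, 2], 2))  # → 4 misses (only the second request to 1 hits)
```

Each request costs O(1) amortized time with a hash-linked structure like OrderedDict, which is the efficiency bar that more complex algorithms such as RDM must be measured against.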