We analyse a class of randomized Least Recently Used (LRU) cache replacement algorithms under the independent reference model with generalized Zipf's law request probabilities. The randomization was recently proposed for Web caching as a mechanism that discriminates between different document sizes. In particular, the cache maintains an ordered list of documents in the following way. When a document of size $s$ is requested and found in the cache, it is moved to the front of the cache with probability $p_s$; otherwise the cache stays unchanged. Similarly, if the requested document of size $s$ is not found in the cache, the algorithm places it at the front of the cache with probability $p_s$, or leaves the cache unchanged with the complementary probability $(1-p_s)$. The successive randomized decisions are independent, and the success probabilities $p_s$ are completely determined by the size of the currently requested document. When a replacement is needed, the documents least recently moved to the front of the cache are removed, as many as necessary to accommodate the newly placed document.

In this framework, we provide an explicit asymptotic characterization of the cache fault probability. Using the derived result, we prove that the asymptotic performance of this class of algorithms is optimized when the randomization probabilities are chosen inversely proportional to document sizes. In addition, for this optimized and easy-to-implement policy, we show that its performance is within a constant factor of that of the optimal static algorithm.
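The policy described above can be sketched in a few lines of Python. This is a hypothetical simulation, not the authors' code: the class name, the `prob` callable (mapping a document size to $p_s$), and the byte-based capacity accounting are our own illustrative choices. The front of the ordered list holds the most recently front-moved document, and evictions proceed from the back, i.e. from the documents least recently moved to the front.

```python
import random
from collections import OrderedDict

class RandomizedLRU:
    """Sketch of the size-aware randomized LRU policy (illustrative only).

    On each request for a document of size s, the move-to-front (on a hit)
    or insert-at-front (on a miss) action is taken with probability p_s;
    otherwise the cache is left unchanged.
    """

    def __init__(self, capacity, prob):
        self.capacity = capacity      # total cache size (e.g. in bytes)
        self.prob = prob              # callable: document size -> p_s
        self.cache = OrderedDict()    # doc id -> size; front = most recent
        self.used = 0

    def request(self, doc, size):
        hit = doc in self.cache
        if random.random() < self.prob(size):
            if hit:
                # move the requested document to the front of the list
                self.cache.move_to_end(doc, last=False)
            else:
                # evict least-recently-front-moved documents until it fits
                while self.used + size > self.capacity and self.cache:
                    _, evicted_size = self.cache.popitem(last=True)
                    self.used -= evicted_size
                if size <= self.capacity:
                    self.cache[doc] = size
                    self.cache.move_to_end(doc, last=False)
                    self.used += size
        return hit

# The asymptotically optimal choice shown in the paper is p_s ∝ 1/s;
# the constant 10.0 here is an arbitrary example scaling.
cache = RandomizedLRU(capacity=100, prob=lambda s: min(1.0, 10.0 / s))
```

With `prob=lambda s: 1.0` every decision succeeds and the policy degenerates to ordinary LRU (with size-based eviction), which is a convenient sanity check for the simulation.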