Memory management is a fundamental problem in computer architecture and operating systems. We consider a two-level memory system with a fast but small cache and a slow but large main memory. The underlying theoretical problem is known as the paging problem: a sequence of requests to pages has to be served by making each requested page available in the cache, and a paging strategy replaces pages in the cache with requested ones. The aim is to minimize the number of page faults, which occur whenever a requested page is not in the cache.

Experience shows that the paging strategy LEAST-RECENTLY-USED (LRU) usually incurs only a factor of about 2 to 3 more faults than the optimum. This contrasts with the theoretical worst case, in which this factor can be as large as the cache size k.

One difficulty in analyzing the paging problem has been the lack of an appropriate lower bound on the minimum number of page faults. We address this issue and propose a general lower bound that provides insight into the global structure of a given request sequence. In addition, we derive a characterization of the number of faults incurred by LRU.

We give a theoretical explanation of why LRU performs well in practice. We classify the set of all request sequences according to certain parameters and prove a bound on the competitive ratio of LRU that depends on these parameters. This bound varies between 2 and k; that is, it includes the worst case, but it also explains for which sequences LRU achieves a constant competitive ratio. The classification is motivated by the structure of request sequences arising in practical applications: locality of reference and characteristic data access patterns. We argue that this structure yields values around 2 for our bound. Indeed, the bound lies between 2 and 5 in extensive practical experiments.

Furthermore, we study the paging problem with variable cache size, which has been considered previously. We show that this approach is not appropriate for explaining the typically good performance of LRU.
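The paging model above can be sketched in a few lines of code: count the faults of LRU and of the offline optimum (Belady's farthest-in-future rule, the standard way to compute the optimal fault count) on the same request sequence. This is an illustrative sketch, not code from the work itself; the function names and the sample sequence are our own.

```python
def lru_faults(requests, k):
    """Count page faults of LRU with cache size k."""
    cache = []  # pages ordered from least to most recently used
    faults = 0
    for p in requests:
        if p in cache:
            cache.remove(p)          # hit: refresh recency
        else:
            faults += 1              # fault: page must be brought in
            if len(cache) == k:
                cache.pop(0)         # evict the least recently used page
        cache.append(p)
    return faults

def opt_faults(requests, k):
    """Offline optimum (Belady): evict the page requested farthest in the future."""
    cache = set()
    faults = 0
    for i, p in enumerate(requests):
        if p in cache:
            continue
        faults += 1
        if len(cache) == k:
            def next_use(q):
                # index of the next request to q, or infinity if q never recurs
                for j in range(i + 1, len(requests)):
                    if requests[j] == q:
                        return j
                return float('inf')
            cache.remove(max(cache, key=next_use))
        cache.add(p)
    return faults

# A sequence with locality of reference: LRU stays close to the optimum.
seq = [1, 2, 3, 1, 2, 3, 4, 1, 2, 1, 4, 2]
print(lru_faults(seq, 3), opt_faults(seq, 3))  # 6 faults vs. 4 faults
```

On this sequence the ratio is 1.5; on a sequence that cycles through k+1 distinct pages, LRU faults on every request while the optimum faults roughly once per k requests, which is the worst-case factor of k mentioned above.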
We measure the performance of LRU in a diffuse adversary model using both the expected competitive ratio E[ALG/OPT] and the expected performance ratio E[ALG]/E[OPT], and we compare the two measures. Our analysis shows that the expected competitive ratio gives a misleading answer.
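The distinction between the two measures matters because the expectation of a ratio can differ sharply from the ratio of expectations. A toy calculation (our own illustration, not from the work) makes this concrete: suppose a random input yields cost pair (ALG, OPT) = (10, 1) or (10, 10), each with probability 1/2.

```python
# Hypothetical two-outcome distribution over (ALG cost, OPT cost).
outcomes = [(10, 1), (10, 10)]

# Expected competitive ratio: E[ALG/OPT]
exp_competitive = sum(a / o for a, o in outcomes) / len(outcomes)

# Expected performance ratio: E[ALG] / E[OPT]
exp_performance = sum(a for a, _ in outcomes) / sum(o for _, o in outcomes)

print(exp_competitive, exp_performance)  # 5.5 vs. ~1.82
```

The rare inputs on which OPT is very cheap dominate E[ALG/OPT], while E[ALG]/E[OPT] weights all inputs by their actual costs; this kind of gap is what makes one measure misleading where the other is not.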