Principles of database buffer management. ACM Transactions on Database Systems (TODS).
Amortized efficiency of list update and paging rules. Communications of the ACM.
SIGMOD '87 Proceedings of the 1987 ACM SIGMOD international conference on Management of data.
The LRU-K page replacement algorithm for database disk buffering. SIGMOD '93 Proceedings of the 1993 ACM SIGMOD international conference on Management of data.
The COMFORT automatic tuning project. Information Systems.
Principles of Optimal Page Replacement. Journal of the ACM (JACM).
The working set model for program behavior. Communications of the ACM.
Operating Systems Theory.
2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm. VLDB '94 Proceedings of the 20th International Conference on Very Large Data Bases.
High performance data broadcasting systems. Mobile Networks and Applications.
High Performance Data Broadcasting: A Comprehensive Systems' Perspective. MDM '01 Proceedings of the Second International Conference on Mobile Data Management.
The performance impact of kernel prefetching on buffer cache replacement algorithms. SIGMETRICS '05 Proceedings of the 2005 ACM SIGMETRICS international conference on Measurement and modeling of computer systems.
On joining and caching stochastic streams. Proceedings of the 2005 ACM SIGMOD international conference on Management of data.
ARC: A Self-Tuning, Low Overhead Replacement Cache. FAST '03 Proceedings of the 2nd USENIX Conference on File and Storage Technologies.
A page fault equation for modeling the effect of memory size. Performance Evaluation.
Effectiveness of caching in a distributed digital library system. Journal of Systems Architecture: the EUROMICRO Journal.
Program-counter-based pattern classification in buffer caching. OSDI '04 Proceedings of the 6th conference on Symposium on Operating Systems Design & Implementation - Volume 6.
The Performance Impact of Kernel Prefetching on Buffer Cache Replacement Algorithms. IEEE Transactions on Computers.
A new approach to dynamic self-tuning of database buffers. ACM Transactions on Storage (TOS).
Mining Query Logs: Turning Search Usage Data into Knowledge. Foundations and Trends in Information Retrieval.
Coordinated multimedia object replacement in transcoding proxies. The Journal of Supercomputing.
Dual-layered file cache on cc-NUMA system. IPDPS '06 Proceedings of the 20th international conference on Parallel and distributed processing.
USENIXATC'11 Proceedings of the 2011 USENIX conference on USENIX annual technical conference.
Efficient stack distance computation for priority replacement policies. Proceedings of the 8th ACM International Conference on Computing Frontiers.
Low-overhead decision support for dynamic buffer reallocation. Computer Science - Research and Development.
This paper analyzes a recently published algorithm for page replacement in hierarchical paged memory systems [O'Neil et al. 1993]. The algorithm, called the LRU-K method, reduces to the well-known LRU (Least Recently Used) method for K = 1. Previous work [O'Neil et al. 1993; Weikum et al. 1994; Johnson and Shasha 1994] has shown its effectiveness for K > 1 by simulation, especially in the most common case of K = 2. The basic idea in LRU-K is to keep track of the times of the last K references to each memory page and to use this statistical information to rank-order the pages by their expected future behavior. Based on this ranking, the replacement policy decides which memory-resident page to replace when a newly accessed page must be read into memory. In the current paper, we prove, under the assumptions of the independent reference model, that LRU-K is optimal. Specifically, we show that, given the times of the (up to) K most recent references to each disk page, no other algorithm A that decides which pages to keep in a memory buffer holding n - 1 pages based on this information can improve on the expected number of I/Os to access pages over the LRU-K algorithm using a memory buffer holding n pages. The proof uses the Bayesian formula to relate the space of actual page probabilities of the model to the space of observable page numbers on which the replacement decision is actually made.
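The rank-ordering described above is commonly phrased in terms of each page's "backward K-distance": the time of its K-th most recent reference, treated as infinitely old when the page has fewer than K recorded references. A minimal sketch of that bookkeeping, assuming a simple timestamp counter and invented class and method names (this is an illustration of the idea, not the paper's implementation):

```python
class LRUK:
    """Hypothetical sketch of LRU-K eviction (default K = 2).

    Keeps the last K reference times of each resident page and evicts
    the page whose K-th most recent reference is oldest; pages with
    fewer than K references count as infinitely old and go first.
    """

    def __init__(self, capacity, k=2):
        self.capacity = capacity
        self.k = k
        self.clock = 0
        self.hist = {}  # page -> its last K reference times, newest first

    def access(self, page):
        """Reference `page`; return the evicted page, or None."""
        self.clock += 1
        evicted = None
        if page not in self.hist and len(self.hist) >= self.capacity:
            evicted = self._victim()
            del self.hist[evicted]
        times = self.hist.setdefault(page, [])
        times.insert(0, self.clock)
        del times[self.k:]  # retain only the K most recent times
        return evicted

    def _victim(self):
        # Backward K-distance: the K-th most recent reference time,
        # or -infinity if the page has fewer than K references yet.
        def kth_time(p):
            t = self.hist[p]
            return t[self.k - 1] if len(t) >= self.k else float("-inf")

        # Evict the page with the oldest K-th reference; break ties
        # among under-referenced pages by oldest last reference.
        return min(self.hist, key=lambda p: (kth_time(p), self.hist[p][0]))
```

With K = 2 and a two-page buffer, accessing pages a, b, a, c evicts b rather than a: a has two recorded references while b has only one, which is the scan-resistant behavior that distinguishes LRU-K from plain LRU.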