Amortized efficiency of list update and paging rules
Communications of the ACM
Disk cache—miss ratio analysis and design considerations
ACM Transactions on Computer Systems (TOCS)
A locally adaptive data compression scheme
Communications of the ACM
The design of the UNIX operating system
Caching in the Sprite network file system
ACM Transactions on Computer Systems (TOCS)
Data cache management using frequency-based replacement
SIGMETRICS '90 Proceedings of the 1990 ACM SIGMETRICS conference on Measurement and modeling of computer systems
The LRU-K page replacement algorithm for database disk buffering
SIGMOD '93 Proceedings of the 1993 ACM SIGMOD international conference on Management of data
RAID: high-performance, reliable secondary storage
ACM Computing Surveys (CSUR)
An optimality proof of the LRU-K page replacement algorithm
Journal of the ACM (JACM)
SIGMETRICS '99 Proceedings of the 1999 ACM SIGMETRICS international conference on Measurement and modeling of computer systems
Principles of Optimal Page Replacement
Journal of the ACM (JACM)
A middleware system which intelligently caches query results
IFIP/ACM International Conference on Distributed systems platforms
The fractal structure of data reference: applications to the memory hierarchy
SIGMETRICS '02 Proceedings of the 2002 ACM SIGMETRICS international conference on Measurement and modeling of computer systems
Operating Systems Theory
IEEE Transactions on Computers
2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm
VLDB '94 Proceedings of the 20th International Conference on Very Large Data Bases
My Cache or Yours? Making Storage More Exclusive
ATEC '02 Proceedings of the General Track of the annual conference on USENIX Annual Technical Conference
The Multi-Queue Replacement Algorithm for Second Level Buffer Caches
Proceedings of the General Track: 2002 USENIX Annual Technical Conference
WSCLOCK—a simple and effective algorithm for virtual memory management
SOSP '81 Proceedings of the eighth ACM symposium on Operating systems principles
Characteristics of I/O Traffic in Personal Computer and Server Workloads
Cost-aware WWW proxy caching algorithms
USITS'97 Proceedings of the USENIX Symposium on Internet Technologies and Systems on USENIX Symposium on Internet Technologies and Systems
IEEE Transactions on Software Engineering
A study of replacement algorithms for a virtual-storage computer
IBM Systems Journal
Evaluation techniques for storage hierarchies
IBM Systems Journal
Managing IBM database 2 buffers to maximize performance
IBM Systems Journal
On the benefits of P2P cache capacity allocation
Proceedings of the 23rd International Teletraffic Congress
Virtual I/O caching: dynamic storage cache management for concurrent workloads
Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis
Research of hot-spot selection algorithm in virtual address switch
ISPA'05 Proceedings of the 2005 international conference on Parallel and Distributed Processing and Applications
A new cache model and replacement algorithm for network attached optical jukebox
WAIM'05 Proceedings of the 6th international conference on Advances in Web-Age Information Management
AD-LRU: An efficient buffer replacement algorithm for flash-based databases
Data & Knowledge Engineering
BEAST: a buffer replacement algorithm using spatial and temporal locality
ICCSA'06 Proceedings of the 2006 international conference on Computational Science and Its Applications - Volume Part II
BRUST: an efficient buffer replacement for spatial databases
ICCS'06 Proceedings of the 6th international conference on Computational Science - Volume Part I
SHiP: signature-based hit predictor for high performance caching
Proceedings of the 44th Annual IEEE/ACM International Symposium on Microarchitecture
Frugal storage for cloud file systems
Proceedings of the 7th ACM european conference on Computer Systems
ARC-H: Adaptive replacement cache management for heterogeneous storage devices
Journal of Systems Architecture: the EUROMICRO Journal
Foundations and Trends in Databases
Proceedings of the VLDB Endowment
A parallel page cache: IOPS and caching for multicore systems
HotStorage'12 Proceedings of the 4th USENIX conference on Hot Topics in Storage and File Systems
Optimal bypass monitor for high performance last-level caches
Proceedings of the 21st international conference on Parallel architectures and compilation techniques
The evicted-address filter: a unified mechanism to address both cache pollution and thrashing
Proceedings of the 21st international conference on Parallel architectures and compilation techniques
Enabling efficient OS paging for main-memory OLTP databases
Proceedings of the Ninth International Workshop on Data Management on New Hardware
We consider the problem of cache management in a demand paging scenario with uniform page sizes. We propose a new cache management policy, namely, Adaptive Replacement Cache (ARC), that has several advantages. In response to evolving and changing access patterns, ARC dynamically, adaptively, and continually balances between the recency and frequency components in an online and self-tuning fashion. The policy ARC uses a learning rule to adaptively and continually revise its assumptions about the workload. The policy ARC is empirically universal, that is, it empirically performs as well as a certain fixed replacement policy, even when the latter uses the best workload-specific tuning parameter that was selected in an offline fashion. Consequently, ARC works uniformly well across varied workloads and cache sizes without any need for workload-specific a priori knowledge or tuning. Various policies such as LRU-2, 2Q, LRFU, and LIRS require user-defined parameters, and, unfortunately, no single choice works uniformly well across different workloads and cache sizes. The policy ARC is simple to implement and, like LRU, has constant complexity per request. In comparison, the policies LRU-2 and LRFU both require time logarithmic in the cache size per request. The policy ARC is scan-resistant: it allows one-time sequential requests to pass through without polluting the cache. On 23 real-life traces drawn from numerous domains, ARC leads to substantial performance gains over LRU for a wide range of cache sizes. For example, for an SPC-1-like synthetic benchmark, with a 4GB cache, LRU delivers a hit ratio of 9.19% while ARC achieves a hit ratio of 20%.
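The abstract's description of ARC's self-tuning balance between recency and frequency can be made concrete with a simplified sketch based on the published algorithm: two cache lists (T1 for pages seen once, T2 for pages seen repeatedly), two ghost lists of recently evicted keys (B1, B2), and an adaptation target p that a ghost hit nudges toward whichever list is proving useful. This is an illustrative sketch, not the authors' reference implementation; the class and method names are invented here.

```python
from collections import OrderedDict

class ARC:
    """Simplified sketch of the ARC policy (after Megiddo and Modha).
    T1/T2 hold cached keys (recency/frequency); B1/B2 are ghost lists
    of evicted keys; p is the adaptive target size of T1."""

    def __init__(self, c):
        self.c = c          # cache capacity in pages
        self.p = 0.0        # adaptive target for |T1|
        self.t1 = OrderedDict()  # seen once recently (cached)
        self.t2 = OrderedDict()  # seen at least twice recently (cached)
        self.b1 = OrderedDict()  # ghosts evicted from T1 (keys only)
        self.b2 = OrderedDict()  # ghosts evicted from T2 (keys only)

    def _replace(self, key):
        # Evict the LRU page of T1 or T2, remembering it in its ghost list.
        if self.t1 and ((key in self.b2 and len(self.t1) == int(self.p))
                        or len(self.t1) > int(self.p)):
            old, _ = self.t1.popitem(last=False)
            self.b1[old] = None
        else:
            old, _ = self.t2.popitem(last=False)
            self.b2[old] = None

    def request(self, key):
        """Process one page request; return True on a hit, False on a miss."""
        if key in self.t1:                 # hit: promote to frequency list
            self.t1.pop(key)
            self.t2[key] = None
            return True
        if key in self.t2:                 # hit: refresh MRU position
            self.t2.move_to_end(key)
            return True
        if key in self.b1:                 # ghost hit: recency helps, grow p
            self.p = min(self.c, self.p + max(len(self.b2) / len(self.b1), 1))
            self._replace(key)
            self.b1.pop(key)
            self.t2[key] = None
            return False
        if key in self.b2:                 # ghost hit: frequency helps, shrink p
            self.p = max(0.0, self.p - max(len(self.b1) / len(self.b2), 1))
            self._replace(key)
            self.b2.pop(key)
            self.t2[key] = None
            return False
        # Brand-new key: make room while bounding the directory at 2c entries.
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)
                self._replace(key)
            else:
                self.t1.popitem(last=False)
        elif len(self.t1) + len(self.b1) < self.c:
            total = len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2)
            if total >= self.c:
                if total == 2 * self.c:
                    self.b2.popitem(last=False)
                self._replace(key)
        self.t1[key] = None
        return False
```

The sketch also illustrates the scan resistance claimed in the abstract: a one-time sequential scan only churns T1, so pages that earned their way into T2 survive it.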