On multi-level exclusive caching: offline optimality and why promotions are better than demotions
FAST'08 Proceedings of the 6th USENIX Conference on File and Storage Technologies
Improving I/O performance of applications through compiler-directed code restructuring
FAST'08 Proceedings of the 6th USENIX Conference on File and Storage Technologies
Practical techniques for purging deleted data using liveness information
ACM SIGOPS Operating Systems Review - Research and developments in the Linux kernel
Prefetch throttling and data pinning for improving performance of shared caches
Proceedings of the 2008 ACM/IEEE conference on Supercomputing
Dynamic partitioning of the cache hierarchy in shared data centers
Proceedings of the VLDB Endowment
Profiler and compiler assisted adaptive I/O prefetching for shared storage caches
Proceedings of the 17th international conference on Parallel architectures and compilation techniques
CLIC: client-informed caching for storage servers
FAST '09 Proceedings of the 7th conference on File and storage technologies
DHIS: discriminating hierarchical storage
SYSTOR '09 Proceedings of SYSTOR 2009: The Israeli Experimental Systems Conference
MCC-DB: minimizing cache conflicts in multi-core processors for databases
Proceedings of the VLDB Endowment
SOPA: Selecting the optimal caching policy adaptively
ACM Transactions on Storage (TOS)
Adaptive multi-level cache allocation in distributed storage architectures
Proceedings of the 24th ACM International Conference on Supercomputing
Computation mapping for multi-level storage cache hierarchies
Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing
Cashing in on hints for better prefetching and caching in PVFS and MPI-IO
Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing
CacheCard: caching static and dynamic content on the NIC
Proceedings of the 5th ACM/IEEE Symposium on Architectures for Networking and Communications Systems
Management of Multilevel, Multiclient Cache Hierarchies with Application Hints
ACM Transactions on Computer Systems (TOCS)
Cost-aware caching schemes in heterogeneous storage systems
The Journal of Supercomputing
Differentiated storage services
SOSP '11 Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles
Virtual I/O caching: dynamic storage cache management for concurrent workloads
Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis
FlashTier: a lightweight, consistent and durable storage cache
Proceedings of the 7th ACM european conference on Computer Systems
ARC-H: Adaptive replacement cache management for heterogeneous storage devices
Journal of Systems Architecture: the EUROMICRO Journal
Compiler-directed file layout optimization for hierarchical storage systems
SC '12 Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis
Enhancing both fairness and performance using rate-aware dynamic storage cache partitioning
DISCS-2013 Proceedings of the 2013 International Workshop on Data-Intensive Scalable Computing Systems
Flash caching on the storage client
USENIX ATC'13 Proceedings of the 2013 USENIX conference on Annual Technical Conference
Compiler-directed file layout optimization for hierarchical storage systems
Scientific Programming - Selected Papers from Super Computing 2012
Multilevel caching, common in many storage configurations, introduces new challenges to traditional cache management: data must be kept in the appropriate cache, and replication across the various cache levels must be avoided. Some existing solutions focus on avoiding replication across the levels of the hierarchy; they work well without information about temporal locality, information that is missing at all but the highest level of the hierarchy. Others use application hints to influence cache contents. We present Karma, a global, non-centralized, dynamic, and informed management policy for multiple levels of cache. Karma leverages application hints to make informed allocation and replacement decisions at all cache levels, preserving exclusive caching and adjusting to changes in access patterns. We demonstrate Karma's superiority through comparison with existing solutions including LRU, 2Q, ARC, MultiQ, LRU-SP, and Demote: Karma achieves better cache performance than all of them, with up to 85% better performance than LRU on representative workloads.
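The two ideas the abstract combines, exclusive caching across levels and hint-driven replacement, can be illustrated with a small sketch. This is not the paper's Karma implementation; it is a hypothetical two-level cache in which a caller-supplied hint function (a priority per block, standing in for application access-pattern hints) drives eviction, a block evicted from the upper level is demoted into the lower one, and a lower-level hit removes the block there before promoting it, so no block is ever replicated across levels.

```python
class ExclusiveHintCache:
    """Sketch of a two-level exclusive cache with hint-driven eviction.

    Hypothetical illustration only: `hint` maps a block id to a priority
    (higher = keep longer), standing in for application hints.
    """

    def __init__(self, l1_size, l2_size, hint):
        self.l1, self.l2 = {}, {}            # block -> last-access tick
        self.l1_size, self.l2_size = l1_size, l2_size
        self.hint = hint
        self.tick = 0

    def _evict(self, level, size):
        # Evict the lowest-priority block; break ties by least recency.
        if len(level) >= size:
            victim = min(level, key=lambda b: (self.hint(b), level[b]))
            del level[victim]
            return victim
        return None

    def access(self, block):
        """Return 'l1', 'l2', or 'miss', maintaining exclusivity."""
        self.tick += 1
        if block in self.l1:                 # upper-level hit
            self.l1[block] = self.tick
            return 'l1'
        if block in self.l2:                 # lower-level hit: remove there,
            del self.l2[block]               # then promote (stays exclusive)
            self._insert_l1(block)
            return 'l2'
        self._insert_l1(block)               # miss: fetch straight into L1
        return 'miss'

    def _insert_l1(self, block):
        demoted = self._evict(self.l1, self.l1_size)
        self.l1[block] = self.tick
        if demoted is not None:              # demote the L1 victim into L2
            self._evict(self.l2, self.l2_size)
            self.l2[demoted] = self.tick
```

For example, with `ExclusiveHintCache(2, 2, hint=lambda b: 1 if b < 10 else 0)`, three cold misses on blocks 1, 2, 3 leave block 1 demoted to L2, so a fourth access to block 1 returns `'l2'`, and at every step the two levels hold disjoint block sets. Real multilevel policies must also cope with shared lower-level caches and changing access patterns, which this sketch omits.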