TBF: A memory-efficient replacement policy for flash-based caches

  • Authors: Biplob Debnath, Cristian Ungureanu, Akshat Aranya, Stephen Rago
  • Affiliations: NEC Laboratories America (all authors)

  • Venue: ICDE '13: Proceedings of the 2013 IEEE International Conference on Data Engineering (ICDE 2013)
  • Year: 2013

Abstract

The performance and capacity characteristics of flash storage make it attractive to use as a cache. Recency-based cache replacement policies rely on an in-memory full index, typically a B-tree or a hash table, that maps each object to its recency information. Even though the recency information itself may take very little space, the full index for a cache holding N keys requires at least log N bits per key. This metadata overhead is undesirably high for very large flash-based caches, such as key-value stores with billions of objects. To solve this problem, we propose a new RAM-frugal cache replacement policy that approximates the least-recently-used (LRU) policy. It maintains recency information in two in-memory Bloom sub-filters (TBF) and leverages an on-flash key-value store to cache objects. TBF requires only one byte of RAM per cached object, making it suitable for implementing very large flash-based caches. We evaluate TBF through simulation on traces from several block stores and key-value stores, as well as through a real system implementation using the Yahoo! Cloud Serving Benchmark. The evaluation shows that TBF achieves cache hit rates and operations per second comparable to those of LRU despite its much smaller memory requirements.
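To illustrate the core idea of tracking recency with two Bloom sub-filters, the sketch below shows one plausible way such a structure could look. It is not the authors' implementation: the class and method names (TwoBloomRecency, record_access, is_recent, rotate), the sizing parameters, the hash construction, and the rotation scheme (periodically clearing the older sub-filter and redirecting new insertions to it) are assumptions made for illustration; the abstract only states that two in-memory Bloom sub-filters hold the recency information.

```python
import hashlib

class TwoBloomRecency:
    """Hypothetical sketch of a two-Bloom-sub-filter recency tracker.

    Accessed keys are recorded in the 'current' sub-filter; rotation
    clears the older sub-filter so stale recency bits eventually expire.
    A key absent from both sub-filters is treated as cold and becomes
    an eviction candidate.  (Rotation policy and sizing are assumptions,
    not taken from the paper.)
    """

    def __init__(self, bits_per_filter=8 * 1024 * 1024, num_hashes=4):
        self.m = bits_per_filter          # bits per sub-filter
        self.k = num_hashes               # hash functions per key
        self.filters = [bytearray(self.m // 8), bytearray(self.m // 8)]
        self.current = 0                  # sub-filter receiving new insertions

    def _positions(self, key):
        # Derive k bit positions from one digest of the key.
        digest = hashlib.sha256(key.encode()).digest()
        for i in range(self.k):
            chunk = digest[4 * i: 4 * i + 4]
            yield int.from_bytes(chunk, "little") % self.m

    def record_access(self, key):
        # Mark the key as recently used in the current sub-filter.
        buf = self.filters[self.current]
        for pos in self._positions(key):
            buf[pos // 8] |= 1 << (pos % 8)

    def is_recent(self, key):
        # A key counts as recent if either sub-filter contains it.
        def contains(buf):
            return all(buf[p // 8] & (1 << (p % 8)) for p in self._positions(key))
        return contains(self.filters[0]) or contains(self.filters[1])

    def rotate(self):
        # Clear the older sub-filter and make it the target of new
        # insertions; the other sub-filter preserves the recent history.
        self.current ^= 1
        self.filters[self.current] = bytearray(self.m // 8)
```

In a sketch like this, each sub-filter would be sized relative to the number of objects held on flash so that the total in-memory state stays on the order of a byte per cached object, in line with the memory budget the abstract describes; a key that misses both sub-filters at eviction time is a plausible victim, which approximates LRU without a per-key index.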