A parallel page cache: IOPS and caching for multicore systems

  • Authors:
  • Da Zheng, Randal Burns, Alexander S. Szalay

  • Affiliations:
  • Department of Computer Science, Johns Hopkins University (Zheng, Burns); Department of Physics and Astronomy, Johns Hopkins University (Szalay)

  • Venue:
  • HotStorage'12: Proceedings of the 4th USENIX Conference on Hot Topics in Storage and File Systems
  • Year:
  • 2012


Abstract

We present a set-associative page cache for scalable parallelism of IOPS in multicore systems. The design eliminates lock contention and hardware cache misses by partitioning the global cache into many independent page sets, each requiring a small amount of metadata that fits in a few processor cache lines. We extend this design with message passing among processors in a non-uniform memory architecture (NUMA). We evaluate the set-associative cache on 12-core processors and a 48-core NUMA system to show that it realizes the scalable IOPS of direct I/O (no caching) while matching the cache hit rates of Linux's page cache. Set-associative caching maintains IOPS at scale, in contrast to Linux, whose IOPS collapse beyond eight parallel threads.
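
To make the set-partitioned design concrete, the sketch below shows a minimal set-associative page cache in C: page offsets hash to independent sets, each set carries its own spinlock and a handful of compact tags so its metadata fits in a few cache lines, and a lookup touches only one set. The names (sa_cache, pg_set, sa_cache_lookup), the 8-way associativity, and the use of spinlocks are illustrative assumptions, not details taken from the paper.

```c
/*
 * Minimal sketch of a set-associative page cache (assumed 8-way sets,
 * one spinlock per set).  All names here are hypothetical; the paper's
 * eviction policy and NUMA message passing are omitted.
 */
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

#define SET_WAYS 8                      /* pages per set; tags stay in a few cache lines */

struct page {
    uint64_t offset;                    /* file offset identifying the cached page */
    void    *data;                      /* page-sized buffer */
};

struct pg_set {
    pthread_spinlock_t lock;            /* per-set lock: contention is local to one set */
    uint64_t           tags[SET_WAYS];  /* compact tags checked on lookup */
    struct page       *pages[SET_WAYS];
};

struct sa_cache {
    size_t          nsets;              /* number of independent page sets */
    struct pg_set  *sets;
};

static inline struct pg_set *hash_to_set(struct sa_cache *c, uint64_t off)
{
    /* Hash the page offset to one of the independent sets. */
    return &c->sets[(off * 0x9e3779b97f4a7c15ULL) % c->nsets];
}

/* Return the cached page for 'off', or NULL on a miss (the caller
 * would then read from disk and insert into the same set). */
struct page *sa_cache_lookup(struct sa_cache *c, uint64_t off)
{
    struct pg_set *set = hash_to_set(c, off);
    struct page *hit = NULL;

    pthread_spin_lock(&set->lock);
    for (int i = 0; i < SET_WAYS; i++) {
        if (set->pages[i] != NULL && set->tags[i] == off) {
            hit = set->pages[i];
            break;
        }
    }
    pthread_spin_unlock(&set->lock);
    return hit;
}
```

Because each set has its own lock and compact metadata, concurrent threads mostly operate on disjoint sets and disjoint cache lines, which is the property the abstract credits for avoiding lock contention and hardware cache misses.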