Toward a principled framework for benchmarking consistency

  • Authors (with affiliations):
  • Muntasir Raihan Rahman (HP Labs, Palo Alto; University of Illinois at Urbana-Champaign)
  • Wojciech Golab (HP Labs, Palo Alto)
  • Alvin AuYoung (HP Labs, Palo Alto)
  • Kimberly Keeton (HP Labs, Palo Alto)
  • Jay J. Wylie (HP Labs, Palo Alto)

  • Venue:
  • HotDep'12 Proceedings of the Eighth USENIX conference on Hot Topics in System Dependability
  • Year:
  • 2012

Abstract

Large-scale key-value storage systems sacrifice consistency in the interest of dependability (i.e., partition-tolerance and availability), as well as performance (i.e., latency). Such systems provide eventual consistency, which, to this point, has been difficult to quantify in real systems. Given the many implementations and deployments of eventually-consistent systems (e.g., NoSQL systems), attempts have been made to measure this consistency empirically, but they suffer from important drawbacks. For example, state-of-the-art consistency benchmarks exercise the system only in restricted ways and disrupt the workload, which limits their accuracy. In this paper, we take the position that a consistency benchmark should paint a comprehensive picture of the relationship between the storage system under consideration, the workload, the pattern of failures, and the consistency observed by clients. To illustrate our point, we first survey prior efforts to quantify eventual consistency. We then present a benchmarking technique that overcomes the shortcomings of existing techniques to measure the consistency observed by clients as they execute the workload under consideration. This method is versatile and minimally disruptive to the system under test. As a proof of concept, we demonstrate this tool on Cassandra.
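To make the notion of "consistency observed by clients" concrete, the sketch below shows one simple, hedged way such a measurement could work: clients passively log timestamped reads and writes during the workload, and an offline pass flags reads that returned a value older than the most recently completed write, reporting how stale each such read was. This is an illustrative approach, not the paper's actual technique; the `Op` record and `staleness_violations` function are hypothetical names.

```python
# Hypothetical sketch: estimating client-observed staleness from a
# passively collected operation log. Field names are illustrative,
# not the paper's API.
from dataclasses import dataclass

@dataclass
class Op:
    start: float   # invocation time at the client
    finish: float  # response time at the client
    kind: str      # "write" or "read"
    key: str
    value: str

def staleness_violations(log):
    """For each read, check whether it returned the value of the most
    recent write on its key that finished before the read began; if
    not, report the read together with a staleness estimate (read
    start minus the finish time of the newest overwriting write)."""
    violations = []
    writes = sorted((o for o in log if o.kind == "write"),
                    key=lambda o: o.finish)
    for r in (o for o in log if o.kind == "read"):
        prior = [w for w in writes
                 if w.key == r.key and w.finish <= r.start]
        if prior and r.value != prior[-1].value:
            violations.append((r, r.start - prior[-1].finish))
    return violations
```

Because the log is gathered from the workload's own operations rather than from injected probe traffic, an analysis of this shape stays minimally disruptive to the system under test, which is the property the abstract emphasizes.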