Quality-of-service for consistency of data geo-replication in cloud computing

  • Authors:
  • Sérgio Esteves; João Silva; Luís Veiga

  • Affiliations:
  • Instituto Superior Técnico, UTL / INESC-ID Lisboa, GSD, Lisbon, Portugal (all authors)

  • Venue:
  • Euro-Par'12: Proceedings of the 18th International Conference on Parallel Processing
  • Year:
  • 2012


Abstract

Today we are increasingly dependent on critical data stored in cloud data centers across the world. To deliver high availability and improved performance, different replication schemes are used to maintain consistency among replicas. With classical consistency models, performance is necessarily degraded, and thus most highly scalable cloud data centers sacrifice consistency to some extent in exchange for lower latencies to end-users. Moreover, those cloud systems blindly allow stale data to exist for some constant period of time, disregarding the semantics and importance of the data, which could be used to tune consistency more wisely by combining stronger and weaker consistency levels. To tackle this inherent and well-studied trade-off between availability and consistency, we propose the use of VFC3, a novel consistency model for data replicated across data centers, with framework and library support to enforce increasing degrees of consistency for different types of data (based on their semantics). It targets cloud tabular data stores, offering rationalization of resources (especially bandwidth) and improvement of QoS (performance, latency, and availability), by providing strong consistency where it matters most and relaxing it on less critical classes or items of data.
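The core idea of the abstract, enforcing different degrees of consistency per class of data, can be sketched as follows. This is an illustrative assumption in the spirit of vector-field consistency (from which VFC3 descends), not the paper's actual API: each data class carries a divergence-bound vector (maximum staleness time, maximum unsynchronized updates, maximum value drift), and a replica must synchronize as soon as any bound is exceeded. All names and bounds below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ConsistencyVector:
    # Hypothetical divergence bounds per data class (names are illustrative):
    max_staleness_secs: float   # max time a replica may go without syncing
    max_pending_updates: int    # max updates applied locally but not propagated
    max_value_drift: float      # max numeric divergence from the primary value

@dataclass
class ReplicaState:
    secs_since_sync: float
    pending_updates: int
    value_drift: float

def must_synchronize(state: ReplicaState, bound: ConsistencyVector) -> bool:
    """A replica must propagate/fetch updates once ANY bound is exceeded."""
    return (state.secs_since_sync > bound.max_staleness_secs
            or state.pending_updates > bound.max_pending_updates
            or state.value_drift > bound.max_value_drift)

# Critical data class: tight bounds approximate strong consistency.
critical = ConsistencyVector(max_staleness_secs=1.0,
                             max_pending_updates=1,
                             max_value_drift=0.0)
# Less critical data class: relaxed bounds save bandwidth and latency.
relaxed = ConsistencyVector(max_staleness_secs=60.0,
                            max_pending_updates=100,
                            max_value_drift=10.0)

state = ReplicaState(secs_since_sync=5.0, pending_updates=3, value_drift=0.5)
print(must_synchronize(state, critical))  # True: exceeds the tight bounds
print(must_synchronize(state, relaxed))   # False: still within relaxed bounds
```

Under this sketch, the same replica state forces synchronization for critical data but not for relaxed data, which is how stronger and weaker consistency levels can coexist in one store.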