Characterizing datasets for data deduplication in backup applications

  • Authors:
  • Nohhyun Park; David J. Lilja

  • Affiliations:
  • Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA

  • Venue:
  • IISWC '10 Proceedings of the IEEE International Symposium on Workload Characterization (IISWC'10)
  • Year:
  • 2010

Abstract

The compression and throughput performance of a data deduplication system are directly affected by the input dataset. We propose two sets of evaluation metrics, and the means to extract those metrics, for deduplication systems. The first set of metrics represents how the composition of segments changes within the deduplication system over five full backups. This in turn provides more insight into how the compression ratio will change as data accumulate. The second set of metrics represents the index table fragmentation caused by duplicate elimination and the arrival rate at the underlying storage system. We show that, while shorter sequences of unique data may be bad for index caching, they provide a more uniform arrival rate, which improves the overall throughput. Finally, we compute these metrics for the datasets under evaluation and show how the datasets differ across them. Our evaluation shows that backup datasets typically exhibit patterns in how they change over time and that these patterns are quantifiable in terms of how they affect the deduplication process. This quantification allows us to: 1) decide whether deduplication is applicable, 2) provision resources, 3) tune the data deduplication parameters and 4) potentially decide which portion of the dataset is best suited for deduplication.
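To make the segment and index-table terminology concrete, the following is a minimal sketch of segment-based duplicate elimination, not taken from the paper: it assumes fixed-size 8 KB segments, SHA-1 fingerprints, and an in-memory dictionary standing in for the index table (the paper's metrics concern content-defined segments and a real, potentially fragmented index), and it reports a per-backup deduplication ratio analogous to the compression ratio discussed above.

    import hashlib

    SEGMENT_SIZE = 8 * 1024  # assumed fixed segment size; real systems often use content-defined chunking

    def deduplicate(stream, index):
        """Split a byte stream into segments, eliminate duplicates against an index table,
        and return (total_segments, segments_stored_this_pass)."""
        total, stored = 0, 0
        for offset in range(0, len(stream), SEGMENT_SIZE):
            segment = stream[offset:offset + SEGMENT_SIZE]
            fingerprint = hashlib.sha1(segment).hexdigest()
            total += 1
            if fingerprint not in index:       # index lookup; the layout of this table is
                index[fingerprint] = segment   # what the fragmentation metrics characterize
                stored += 1
        return total, stored

    # Toy example: a "full backup" followed by a second backup with partial changes.
    index = {}
    backup1 = b"A" * 32768 + b"B" * 32768
    backup2 = b"A" * 32768 + b"C" * 32768      # half the data is unchanged

    for i, backup in enumerate((backup1, backup2), start=1):
        total, stored = deduplicate(backup, index)
        ratio = total / stored if stored else float("inf")
        print(f"backup {i}: {total} segments, {stored} stored, dedup ratio {ratio:.1f}x")

In this toy run the second backup stores far fewer new segments than the first, illustrating how the composition of unique versus duplicate segments, and hence the achievable compression ratio, shifts as successive full backups accumulate.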