Dynamic tuning of online data migration policies in hierarchical storage systems using reinforcement learning

  • Authors:
  • David Vengerov

  • Affiliations:
  • Sun Microsystems Laboratories, Menlo Park, CA

  • Venue:
  • Sun Microsystems Laboratories Technical Report
  • Year:
  • 2006

Abstract

Multi-tier storage systems are becoming increasingly widespread in industry. To minimize request response time in such systems, the most frequently accessed ("hot") files should reside in the fastest storage tiers (which are usually smaller and more expensive than the other tiers). Unfortunately, it is impossible to know ahead of time which files will be "hot", especially because file access patterns change over time. This report presents a solution approach to this problem in which each tier uses Reinforcement Learning (RL) to learn its own cost function predicting its future request response time; files are then migrated between tiers so as to decrease the sum of the costs of the tiers involved. A multi-tier storage system simulator was used to evaluate the migration policies tuned by RL, and these policies were shown to achieve a significant performance improvement over the best hand-crafted policies found for this domain.
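
To make the learn-and-migrate loop described in the abstract concrete, the following is a minimal sketch, not the report's actual implementation: each tier keeps a linear cost function over a pair of assumed state features, updates it with a one-step TD rule from observed response times, and a file is migrated only when the predicted sum of the two tiers' costs would decrease. All class names, feature choices, and parameter values below are illustrative assumptions.

```python
class TierCostModel:
    """Per-tier cost model: a linear approximation of the tier's future
    request response time, updated with a one-step TD rule. The feature
    choice (space utilization, aggregate access rate) and hyperparameters
    are illustrative assumptions, not taken from the report."""

    def __init__(self, n_features, lr=0.01, gamma=0.9):
        self.w = [0.0] * n_features   # weights of the linear cost function
        self.lr = lr                  # learning rate
        self.gamma = gamma            # discount factor

    def cost(self, features):
        # Predicted future response-time cost for the tier in this state.
        return sum(w * f for w, f in zip(self.w, features))

    def td_update(self, features, observed_response_time, next_features):
        # Move the prediction toward the observed response time plus the
        # discounted prediction for the next observed tier state.
        target = observed_response_time + self.gamma * self.cost(next_features)
        error = target - self.cost(features)
        self.w = [w + self.lr * error * f for w, f in zip(self.w, features)]


def state_with_file(state, f):
    # Hypothetical feature adjustment: adding a file raises the tier's
    # space utilization and its aggregate access rate.
    util, rate = state
    return (util + f["size_frac"], rate + f["access_rate"])


def state_without_file(state, f):
    util, rate = state
    return (util - f["size_frac"], rate - f["access_rate"])


def should_migrate(f, src, dst, src_state, dst_state):
    # Migrate a file from src to dst only if the predicted sum of the two
    # tiers' costs decreases after the move.
    before = src.cost(src_state) + dst.cost(dst_state)
    after = (src.cost(state_without_file(src_state, f)) +
             dst.cost(state_with_file(dst_state, f)))
    return after < before


if __name__ == "__main__":
    fast, slow = TierCostModel(n_features=2), TierCostModel(n_features=2)
    # Weights would normally be learned online from observed response times
    # via td_update(); fixed values are used here just to make the example run.
    fast.w, slow.w = [2.0, 0.01], [5.0, 0.10]
    hot_file = {"size_frac": 0.05, "access_rate": 30.0}
    if should_migrate(hot_file, src=slow, dst=fast,
                      src_state=(0.7, 50.0), dst_state=(0.4, 200.0)):
        print("promote the hot file to the fast tier")
```

The key design point carried over from the abstract is that each tier evaluates only its own learned cost function, and migration decisions are made by comparing the sum of costs of the source and destination tiers before and after the hypothetical move.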