A large-scale study of failures in high-performance computing systems

  • Authors:
  • Bianca Schroeder; Garth A. Gibson

  • Affiliations:
  • Carnegie Mellon University; Carnegie Mellon University

  • Venue:
  • DSN '06: Proceedings of the International Conference on Dependable Systems and Networks
  • Year:
  • 2006

Abstract

Designing highly dependable systems requires a good understanding of failure characteristics. Unfortunately, little raw data on failures in large IT installations is publicly available. This paper analyzes failure data recently made publicly available by one of the largest high-performance computing sites. The data were collected over the past 9 years at Los Alamos National Laboratory and include 23,000 failures recorded on more than 20 different systems, mostly large clusters of SMP and NUMA nodes. We study the statistics of the data, including the root cause of failures, the mean time between failures, and the mean time to repair. We find, for example, that average failure rates differ wildly across systems, ranging from 20 to 1000 failures per year, and that time between failures is modeled well by a Weibull distribution with decreasing hazard rate. From one system to another, mean repair time varies from less than an hour to more than a day, and repair times are well modeled by a lognormal distribution.
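
The distribution fits the abstract mentions (Weibull for time between failures, lognormal for repair times) are standard maximum-likelihood fits. The sketch below shows how one might reproduce that kind of analysis with SciPy; it uses synthetic placeholder samples, not the LANL data, and the parameter values are illustrative assumptions only. Note that a Weibull shape parameter below 1 corresponds to the decreasing hazard rate the paper reports.

```python
# Minimal sketch of Weibull / lognormal fitting, not the paper's code.
# Data below are synthetic placeholders standing in for the LANL records.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical time-between-failures sample, in hours.
tbf = rng.weibull(0.7, size=1000) * 500.0

# Fit a two-parameter Weibull (location fixed at 0, as usual for durations).
shape, _, scale = stats.weibull_min.fit(tbf, floc=0)
print(f"Weibull shape={shape:.2f}, scale={scale:.1f} h")
# shape < 1 implies a decreasing hazard rate, as reported in the paper.
print("decreasing hazard rate" if shape < 1 else "non-decreasing hazard rate")

# Hypothetical repair-time sample, in hours.
repair = rng.lognormal(mean=1.0, sigma=1.2, size=1000)
sigma, _, scale = stats.lognorm.fit(repair, floc=0)
print(f"lognormal sigma={sigma:.2f}, median={scale:.1f} h")
```

A goodness-of-fit check (e.g., `stats.kstest` against the fitted distribution, or a quantile-quantile plot) would normally accompany such fits before concluding that one distribution models the data well.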