Failure correction techniques for large disk arrays

  • Authors:
  • G. A. Gibson; L. Hellerstein; R. M. Karp; D. A. Patterson

  • Affiliations:
  • Computer Science Division, Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, CA (all authors)

  • Venue:
  • ASPLOS III Proceedings of the third international conference on Architectural support for programming languages and operating systems
  • Year:
  • 1989

Abstract

The ever-increasing need for I/O bandwidth will be met with ever-larger arrays of disks. These arrays require redundancy to protect against data loss. This paper examines alternative choices for encodings, or codes, that reliably store information in disk arrays. Codes are selected to maximize mean time to data loss or to minimize the number of disks containing redundant data, but all are constrained to minimize the performance penalties associated with updating information or recovering from catastrophic disk failures. We also present codes that give highly reliable data storage with low redundant-data overhead for arrays of 1000 information disks.
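As a minimal sketch of the kind of redundancy the abstract describes (not the paper's specific codes): the simplest single-erasure code dedicates one redundant disk to the bytewise XOR parity of the information disks, so any one failed disk can be rebuilt from the survivors. The disk contents and helper below are illustrative assumptions.

```python
def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Four hypothetical information disks, each holding one equal-sized block.
disks = [b"data", b"from", b"four", b"dsks"]

# The redundant disk stores the XOR of all information disks.
parity = xor_blocks(disks)

# Simulate a catastrophic failure of disk 2 and reconstruct it:
# XOR of the surviving disks and the parity disk yields the lost block.
failed = 2
survivors = [d for i, d in enumerate(disks) if i != failed] + [parity]
recovered = xor_blocks(survivors)
assert recovered == disks[failed]
```

This scheme tolerates only a single disk failure at a time; the trade-offs the paper studies arise when larger arrays demand protection against multiple concurrent failures without proportionally more redundant disks.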