HFS: a performance-oriented flexible file system based on building-block compositions

  • Authors:
  • Orran Krieger; Michael Stumm

  • Affiliations:
  • IBM T. J. Watson Research Center, Yorktown Heights, NY; University of Toronto, Toronto, Ont., Canada

  • Venue:
  • ACM Transactions on Computer Systems (TOCS)

  • Year:
  • 1997

Abstract

The Hurricane File System (HFS) is designed for (potentially large-scale) shared-memory multiprocessors. Its architecture is based on the principle that, in order to maximize performance for applications with diverse requirements, a file system must support a wide variety of file structures, file system policies, and I/O interfaces. Files in HFS are implemented using simple building blocks composed in potentially complex ways. This approach yields great flexibility, allowing an application to customize the structure and policies of a file to exactly meet its requirements. As an extreme example, HFS allows a file's structure to be optimized for concurrent random-access write-only operations by 10 threads, something no other file system can do. Similarly, the prefetching, locking, and file cache management policies can all be chosen to match an application's access pattern. In contrast, most parallel file systems support a single file structure and a small set of policies. We have implemented HFS as part of the Hurricane operating system running on the Hector shared-memory multiprocessor. We demonstrate that the flexibility of HFS comes with little processing or I/O overhead. We also show that for a number of file access patterns, HFS is able to deliver to the applications the full I/O bandwidth of the disks on our system.
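
The abstract's central architectural idea, building a file out of simple blocks that all export a common interface and can be stacked in arbitrary ways, can be made concrete with a small sketch. The C++ code below is illustrative only and is not the HFS implementation or API: the class names (FileBlock, MemoryBlock, StripeBlock), the byte-granularity interface, and the striping layout are assumptions introduced for the example. It shows a leaf block standing in for per-disk storage and a composite block that stripes a file's data across several leaf blocks, the same compositional pattern the abstract describes.

```cpp
// Illustrative sketch only: these classes are hypothetical and do not
// reflect the actual HFS building-block interfaces.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <memory>
#include <utility>
#include <vector>

// A building block exposes a uniform byte-addressed interface, so blocks
// can be layered on top of one another.
class FileBlock {
public:
    virtual ~FileBlock() = default;
    virtual void write(std::size_t offset, const std::vector<uint8_t>& data) = 0;
    virtual std::vector<uint8_t> read(std::size_t offset, std::size_t len) = 0;
};

// Leaf block: stands in for a per-disk storage object (here just memory).
class MemoryBlock : public FileBlock {
public:
    void write(std::size_t offset, const std::vector<uint8_t>& data) override {
        if (offset + data.size() > store_.size()) store_.resize(offset + data.size());
        std::copy(data.begin(), data.end(), store_.begin() + offset);
    }
    std::vector<uint8_t> read(std::size_t offset, std::size_t len) override {
        std::vector<uint8_t> out(len, 0);
        for (std::size_t i = 0; i < len && offset + i < store_.size(); ++i)
            out[i] = store_[offset + i];
        return out;
    }
private:
    std::vector<uint8_t> store_;
};

// Composite block: stripes data across its sub-blocks in fixed-size units,
// analogous to composing a distributed file from per-disk blocks.
class StripeBlock : public FileBlock {
public:
    StripeBlock(std::vector<std::unique_ptr<FileBlock>> subs, std::size_t unit)
        : subs_(std::move(subs)), unit_(unit) {}
    void write(std::size_t offset, const std::vector<uint8_t>& data) override {
        for (std::size_t i = 0; i < data.size(); ++i) {
            auto [blk, off] = route(offset + i);
            blk->write(off, {data[i]});       // byte-at-a-time for clarity only
        }
    }
    std::vector<uint8_t> read(std::size_t offset, std::size_t len) override {
        std::vector<uint8_t> out;
        for (std::size_t i = 0; i < len; ++i) {
            auto [blk, off] = route(offset + i);
            out.push_back(blk->read(off, 1)[0]);
        }
        return out;
    }
private:
    // Map a logical offset to (sub-block, offset within that sub-block).
    std::pair<FileBlock*, std::size_t> route(std::size_t offset) {
        std::size_t stripe = offset / unit_;
        std::size_t sub    = stripe % subs_.size();
        std::size_t local  = (stripe / subs_.size()) * unit_ + offset % unit_;
        return {subs_[sub].get(), local};
    }
    std::vector<std::unique_ptr<FileBlock>> subs_;
    std::size_t unit_;
};

int main() {
    // Compose a "file" striped over three leaf blocks in 4-byte units.
    std::vector<std::unique_ptr<FileBlock>> disks;
    for (int i = 0; i < 3; ++i) disks.push_back(std::make_unique<MemoryBlock>());
    StripeBlock file(std::move(disks), 4);

    std::vector<uint8_t> msg = {'h', 'e', 'l', 'l', 'o', ' ', 'h', 'f', 's'};
    file.write(0, msg);
    for (uint8_t b : file.read(0, msg.size())) std::cout << b;
    std::cout << '\n';   // prints: hello hfs
    return 0;
}
```

In HFS itself the same compositional mechanism also covers policy, so per-file prefetching, locking, and cache-management behavior can be layered over the structural blocks; the sketch above illustrates only structural composition.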