hFS: a hybrid file system prototype for improving small file and metadata performance

  • Authors: Zhihui Zhang and Kanad Ghose
  • Affiliations: State University of New York, Binghamton, NY (both authors)
  • Venue: Proceedings of the 2nd ACM SIGOPS/EuroSys European Conference on Computer Systems (EuroSys '07)
  • Year: 2007

Abstract

Two oft-cited file systems, the Fast File System (FFS) and the Log-Structured File System (LFS), adopt sharply different update strategies: update-in-place and update-out-of-place. This paper introduces the design and implementation of a hybrid file system called hFS, which combines the strengths of FFS and LFS while avoiding their weaknesses. This is accomplished by distributing file system data into two partitions based on size and type. In hFS, data blocks of large regular files are stored in a data partition arranged in an FFS-like fashion, while metadata and small files are stored in a separate log partition organized in the spirit of LFS but without incurring any cleaning overhead. This segregation allows each kind of data to use a layout better suited to it than a single on-disk organization would permit. In particular, hFS can perform clustered I/O on all kinds of data, including small files, metadata, and large files. We have implemented a prototype of hFS on FreeBSD and have compared its performance against three file systems: FFS with Soft Updates, a port of NetBSD's LFS, and our lightweight journaling file system called yFS. Results on a number of benchmarks show that hFS has excellent small file and metadata performance. For example, hFS outperforms FFS with Soft Updates by 53% to 63% on the PostMark benchmark.
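
The placement policy described above can be sketched in a few lines of C. This is only an illustration of the size- and type-based split, not the paper's actual implementation: the 64 KiB threshold, the function name hfs_choose_partition, and the enum names are hypothetical.

```c
/*
 * Sketch of hFS-style data placement, assuming a hypothetical
 * size cutoff: metadata and small files go to the LFS-style log
 * partition, data blocks of large regular files to the FFS-style
 * data partition. The real hFS policy may differ in its threshold
 * and in how file types are classified.
 */
#include <stdbool.h>
#include <stddef.h>

enum partition { LOG_PARTITION, DATA_PARTITION };

/* Assumed threshold separating "small" from "large" files. */
#define SMALL_FILE_MAX (64 * 1024)

enum partition
hfs_choose_partition(bool is_metadata, size_t file_size)
{
    /* Metadata always goes to the log, regardless of size. */
    if (is_metadata)
        return LOG_PARTITION;

    /* Small regular files benefit from clustered log writes. */
    if (file_size <= SMALL_FILE_MAX)
        return LOG_PARTITION;

    /* Large regular files get an update-in-place, FFS-like layout. */
    return DATA_PARTITION;
}
```

Because both partitions batch related blocks together, either path supports clustered I/O; the split simply lets each workload class use the layout it is best served by.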