Benefits of High Speed Interconnects to Cluster File Systems: A Case Study with Lustre

  • Authors:
  • Weikuan Yu; Ranjit Noronha; Shuang Liang; Dhabaleswar K. Panda

  • Affiliations:
  • Network-Based Computing Lab, Dept. of Computer Science & Engineering, The Ohio State University (all authors)

  • Venue:
  • IPDPS '06: Proceedings of the 20th International Conference on Parallel and Distributed Processing
  • Year:
  • 2006

Abstract

Cluster file systems and Storage Area Networks (SAN) make use of network IO to achieve higher IO bandwidth, so effective integration of networking mechanisms is important to their performance. In this paper, we evaluate a popular cluster file system, Lustre, over two leading high-speed cluster interconnects: InfiniBand and Quadrics. The evaluation uses both sequential IO and parallel IO benchmarks in order to explore the behavior of Lustre under different communication characteristics. Experimental results show that direct implementations of Lustre over both interconnects improve its performance compared to IP emulation over InfiniBand (IPoIB), and that the performance of Lustre over Quadrics is comparable to that of Lustre over InfiniBand on the platforms available to us. The latest InfiniBand products also incorporate newer technologies, such as PCI-Express and DDR, and provide higher capacity: our results show that, on a Lustre file system with two Object Storage Servers (OSSs), InfiniBand with PCI-Express can improve Lustre write performance by 24%. Furthermore, our experimental results indicate that Lustre meta-data operations do not scale with an increasing number of OSSs, despite the use of high-performance interconnects.
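
As a point of reference for the parallel IO benchmarks mentioned in the abstract, the sketch below shows a minimal MPI-IO write test of the kind commonly used to measure aggregate write bandwidth to a Lustre mount. It is an illustrative assumption, not the benchmark used in the paper; the file path, transfer size, and iteration count are hypothetical.

/*
 * Minimal sketch of a parallel write test (illustrative only; not the
 * paper's benchmark). Each MPI process writes a disjoint, contiguous
 * region of one shared file and rank 0 reports aggregate bandwidth.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define XFER_SIZE (4 * 1024 * 1024)   /* 4 MB per write call (assumed) */
#define NUM_XFERS 64                  /* writes per process (assumed)  */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_File fh;
    char *buf;
    double t_start, t_end;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    buf = malloc(XFER_SIZE);
    memset(buf, 'a', XFER_SIZE);

    /* Path to a file on the Lustre mount point (hypothetical). */
    MPI_File_open(MPI_COMM_WORLD, "/mnt/lustre/iotest",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    MPI_Barrier(MPI_COMM_WORLD);
    t_start = MPI_Wtime();

    for (int i = 0; i < NUM_XFERS; i++) {
        MPI_Offset off = ((MPI_Offset)rank * NUM_XFERS + i) * XFER_SIZE;
        MPI_File_write_at(fh, off, buf, XFER_SIZE, MPI_BYTE,
                          MPI_STATUS_IGNORE);
    }

    MPI_File_close(&fh);   /* close flushes outstanding data to the OSSs */
    MPI_Barrier(MPI_COMM_WORLD);
    t_end = MPI_Wtime();

    if (rank == 0) {
        double bytes = (double)nprocs * NUM_XFERS * XFER_SIZE;
        printf("aggregate write bandwidth: %.2f MB/s\n",
               bytes / (t_end - t_start) / 1e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Such a test stresses the data path between clients and OSSs over the chosen interconnect; varying the number of processes and OSSs is what exposes the scaling behavior discussed in the abstract.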