High performance RDMA-based design of HDFS over InfiniBand

  • Authors:
  • N. S. Islam; M. W. Rahman; J. Jose; R. Rajachandrasekar; H. Wang; H. Subramoni; C. Murthy; D. K. Panda

  • Affiliations:
  • The Ohio State University (N. S. Islam, M. W. Rahman, J. Jose, R. Rajachandrasekar, H. Wang, H. Subramoni, D. K. Panda); IBM T. J. Watson Research Center, Yorktown Heights, NY (C. Murthy)

  • Venue:
  • SC '12: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis
  • Year:
  • 2012

Abstract

The Hadoop Distributed File System (HDFS) acts as the primary storage of Hadoop and has been adopted by major organizations (e.g., Facebook and Yahoo!) due to its portability and fault tolerance. The existing implementation of HDFS uses the Java socket interface for communication, which delivers suboptimal performance in terms of latency and throughput. For data-intensive applications, network performance becomes a key factor as the amount of data being stored in and replicated to HDFS increases. In this paper, we present a novel design of HDFS using Remote Direct Memory Access (RDMA) over InfiniBand via JNI interfaces. Experimental results show that, for 5 GB HDFS file writes, the new design reduces the communication time by 87% and 30% over 1 Gigabit Ethernet (1GigE) and IP-over-InfiniBand (IPoIB), respectively, on a QDR platform (32 Gbps). For HBase, the performance of the Put operation is improved by 26% with our design. To the best of our knowledge, this is the first design of HDFS over InfiniBand networks.
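
The abstract describes routing HDFS data transfers through JNI into a native RDMA library instead of Java sockets. The following is a minimal, hypothetical Java sketch of what such a JNI bridge could look like; the class name, method signatures, and native library name (`RdmaBridge`, `rdmahdfs`, etc.) are illustrative assumptions for exposition, not the paper's actual API.

```java
import java.nio.ByteBuffer;

// Hypothetical JNI bridge that HDFS client/DataNode code could call to hand
// block packets to a native RDMA (InfiniBand verbs) library. All names here
// are assumptions made for illustration, not the authors' implementation.
public class RdmaBridge {
    static {
        // Assumed native library wrapping the RDMA verbs/connection manager.
        System.loadLibrary("rdmahdfs");
    }

    // Establish an RDMA connection to a peer DataNode; returns a native handle.
    public native long connect(String host, int port);

    // Register the direct buffer with the HCA (if needed) and transmit
    // 'length' bytes of the HDFS block packet over the RDMA connection.
    public native int writeBlockPacket(long connHandle, ByteBuffer directBuf, int length);

    // Tear down the connection and release registered memory.
    public native void close(long connHandle);

    // Illustrative usage: replacing a socket-based packet write with RDMA.
    public static void main(String[] args) {
        RdmaBridge bridge = new RdmaBridge();
        long conn = bridge.connect("datanode-17", 50010);          // hypothetical peer
        ByteBuffer packet = ByteBuffer.allocateDirect(64 * 1024);  // direct buffer for zero-copy
        // ... fill 'packet' with HDFS block data ...
        bridge.writeBlockPacket(conn, packet, packet.remaining());
        bridge.close(conn);
    }
}
```

Direct (off-heap) byte buffers are used in this sketch because they can be pinned and registered with the InfiniBand HCA without an extra copy, which is the usual motivation for pairing JNI with RDMA in Java-based systems.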