CloudRAMSort: fast and efficient large-scale distributed RAM sort on shared-nothing cluster

  • Authors:
  • Changkyu Kim (Intel Labs, Santa Clara, CA, USA); Jongsoo Park (Intel Labs, Santa Clara, CA, USA); Nadathur Satish (Intel Labs, Santa Clara, CA, USA); Hongrae Lee (Google Research, Mountain View, USA); Pradeep Dubey (Intel Labs, Santa Clara, CA, USA); Jatin Chhugani (Intel Labs, Santa Clara, CA, USA)

  • Venue:
  • SIGMOD '12 Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data
  • Year:
  • 2012

Abstract

Sorting is a fundamental kernel used in many database operations. The total memory available across cloud computers is now sufficient to store even hundreds of terabytes of data in-memory, and applications requiring high-speed data analysis typically rely on in-memory sorting. The two most important factors in designing a high-speed in-memory sorting system are single-node sorting performance and inter-node communication. In this paper, we present CloudRAMSort, a fast and efficient system for large-scale distributed sorting on shared-nothing clusters. CloudRAMSort performs multi-node optimizations by carefully overlapping computation with inter-node communication. The system uses a dynamic multi-stage random sampling approach for improved load balancing between nodes. CloudRAMSort maximizes per-node efficiency by exploiting modern architectural features such as multiple cores and SIMD (Single-Instruction, Multiple-Data) units. This holistic combination results in the highest sorting performance on distributed shared-nothing platforms. CloudRAMSort sorts 1 Terabyte (TB) of data in 4.6 seconds on a 256-node Xeon X5680 cluster, the Intel Endeavor system. CloudRAMSort also performs well on heavily skewed input distributions, sorting 1 TB of data generated using a Zipf distribution in less than 5 seconds. We also provide a detailed analytical model that accurately projects (within 7% on average) the performance of CloudRAMSort with varying tuple sizes and interconnect bandwidths. Our analytical model serves as a useful tool for analyzing performance bottlenecks on current systems and projecting performance with future architectural advances. With architectural trends of increasing core counts, SIMD width, cache sizes, and memory and interconnect bandwidth, we believe CloudRAMSort would be the system of choice for distributed sorting of large-scale in-memory data on current and future systems.
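
To make the sampling-based load-balancing idea concrete, the sketch below shows the general pattern behind sample-based distributed sorts: randomly sampled keys are used to pick splitters, and each tuple is then routed to the node whose key range contains it. This is not the authors' code; the function names, the fixed seed, the sample size, and the plain 64-bit keys are illustrative assumptions rather than details taken from the paper.

    // Minimal sketch of sample-based splitter selection and partitioning,
    // assuming 64-bit keys and a non-empty local key array.
    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <random>
    #include <vector>

    // Pick (num_nodes - 1) splitters from a random sample of the local keys.
    std::vector<uint64_t> choose_splitters(const std::vector<uint64_t>& keys,
                                           std::size_t num_nodes,
                                           std::size_t sample_size) {
        std::mt19937_64 rng(42);  // fixed seed purely for illustration
        std::uniform_int_distribution<std::size_t> pick(0, keys.size() - 1);

        std::vector<uint64_t> sample;
        sample.reserve(sample_size);
        for (std::size_t i = 0; i < sample_size; ++i)
            sample.push_back(keys[pick(rng)]);
        std::sort(sample.begin(), sample.end());

        // Evenly spaced sample elements become the splitters.
        std::vector<uint64_t> splitters;
        for (std::size_t n = 1; n < num_nodes; ++n)
            splitters.push_back(sample[n * sample.size() / num_nodes]);
        return splitters;
    }

    // Route each key to the destination node whose range contains it.
    std::vector<std::vector<uint64_t>> partition(
            const std::vector<uint64_t>& keys,
            const std::vector<uint64_t>& splitters) {
        std::vector<std::vector<uint64_t>> buckets(splitters.size() + 1);
        for (uint64_t k : keys) {
            std::size_t dest =
                std::upper_bound(splitters.begin(), splitters.end(), k) -
                splitters.begin();
            buckets[dest].push_back(k);
        }
        return buckets;
    }

In a real distributed sort, each bucket would be sent to its destination node (overlapped with local computation, as the abstract describes) and then sorted locally; the sketch only covers the splitter-selection and routing step.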