A Scalable Distributed Parallel Breadth-First Search Algorithm on BlueGene/L

  • Authors:
  • Andy Yoo (Lawrence Livermore National Laboratory, Livermore); Edmond Chow (D. E. Shaw Research and Development, New York); Keith Henderson (Lawrence Livermore National Laboratory, Livermore); William McLendon (Sandia National Laboratories, Albuquerque, NM); Bruce Hendrickson (Sandia National Laboratories, Albuquerque, NM); Umit Catalyurek (Ohio State University, Columbus)

  • Venue:
  • SC '05 Proceedings of the 2005 ACM/IEEE conference on Supercomputing
  • Year:
  • 2005

Abstract

Many emerging large-scale data science applications require searching large graphs distributed across multiple memories and processors. This paper presents a distributed breadth-first search (BFS) scheme that scales for random graphs with up to three billion vertices and 30 billion edges. Scalability was tested on IBM BlueGene/L with 32,768 nodes at the Lawrence Livermore National Laboratory and was obtained through a series of optimizations, in particular those that ensure scalable use of memory. We use 2D (edge) partitioning of the graph instead of conventional 1D (vertex) partitioning to reduce communication overhead. For Poisson random graphs, we show that the expected size of the messages is scalable for both 2D and 1D partitionings. Finally, we have developed efficient collective communication functions for the 3D torus architecture of BlueGene/L that also take advantage of the structure in the problem. The performance and characteristics of the algorithm are measured and reported.
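
The core idea in the abstract, level-synchronous BFS over a 2D (edge) partition of the adjacency matrix, with frontier information shared along one dimension of a processor grid and newly discovered vertices folded back to their owners along the other, can be illustrated with a small single-process sketch. The Python code below is an assumption-laden illustration, not the authors' BlueGene/L implementation: the R x C processor grid is simulated with plain dictionaries, MPI-style collectives are replaced by explicit loops, and all names, the block layout, and the vertex-ownership rule are invented for the example.

```python
"""
Minimal single-process sketch of level-synchronous BFS over a 2D (edge)
partitioning of the adjacency matrix, in the spirit of the approach the
abstract describes.  Only the communication pattern (frontier segments
shared along one grid dimension, discoveries folded back to their
owners along the other) is illustrated; nothing here is distributed.
"""
from collections import defaultdict


def bfs_2d(n, edges, R, C, source):
    """BFS levels for an n-vertex graph whose adjacency matrix is block
    partitioned over a simulated R x C grid.  edges holds directed
    (u, v) pairs; add both directions for an undirected graph.
    Returns a list of levels, -1 for unreachable vertices."""

    def row_block(u):   # which of the R row groups vertex u falls in
        return u * R // n

    def col_block(v):   # which of the C column groups vertex v falls in
        return v * C // n

    # adj[(i, j)][u] -> neighbours v of u stored in matrix block (i, j):
    # edge (u, v) lives on the "processor" owning row group row_block(u)
    # and column group col_block(v).
    adj = defaultdict(lambda: defaultdict(list))
    for u, v in edges:
        adj[(row_block(u), col_block(v))][u].append(v)

    level = [-1] * n
    level[source] = 0
    frontier = {source}
    depth = 0

    while frontier:
        # Expand: every block in grid row i needs the frontier vertices
        # whose row group is i.  On a real machine this is a collective
        # (broadcast/allgather) along one dimension of the processor grid.
        frontier_by_row = defaultdict(set)
        for u in frontier:
            frontier_by_row[row_block(u)].add(u)

        # Local traversal + fold: each block scans edges out of its share
        # of the frontier; newly reached vertices are combined ("folded")
        # at their owners, a reduction along the other grid dimension.
        next_frontier = set()
        for i in range(R):
            for j in range(C):
                block = adj[(i, j)]
                for u in frontier_by_row[i]:
                    for v in block.get(u, ()):
                        if level[v] == -1:
                            level[v] = depth + 1
                            next_frontier.add(v)

        frontier = next_frontier
        depth += 1

    return level


if __name__ == "__main__":
    # Tiny undirected example: a 6-vertex path 0-1-2-3-4-5 on a 2x3 grid.
    n = 6
    one_way = [(k, k + 1) for k in range(n - 1)]
    edges = one_way + [(v, u) for (u, v) in one_way]
    print(bfs_2d(n, edges, R=2, C=3, source=0))   # [0, 1, 2, 3, 4, 5]
```

In a real distributed run, the two commented phases map onto collectives within rows and columns of the processor grid, so each processor exchanges data with roughly R + C - 2 peers rather than with all P - 1, which is where the reduction in communication overhead attributed to 2D partitioning comes from.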