A work-efficient parallel breadth-first search algorithm (or how to cope with the nondeterminism of reducers)

  • Authors:
  • Charles E. Leiserson; Tao B. Schardl

  • Affiliations:
  • MIT CSAIL, Cambridge, MA, USA; MIT CSAIL, Cambridge, MA, USA

  • Venue:
  • Proceedings of the twenty-second annual ACM symposium on Parallelism in algorithms and architectures
  • Year:
  • 2010

Abstract

We have developed a multithreaded implementation of breadth-first search (BFS) of a sparse graph using the Cilk++ extensions to C++. Our PBFS program on a single processor runs as quickly as a standard C++ breadth-first search implementation. PBFS achieves high work-efficiency by using a novel implementation of a multiset data structure, called a "bag," in place of the FIFO queue usually employed in serial breadth-first search algorithms. For a variety of benchmark input graphs whose diameters are significantly smaller than the number of vertices (a condition met by many real-world graphs), PBFS demonstrates good speedup with the number of processing cores. Since PBFS employs a nonconstant-time "reducer" (a "hyperobject" feature of Cilk++), the work inherent in a PBFS execution depends nondeterministically on how the underlying work-stealing scheduler load-balances the computation. We provide a general method for analyzing nondeterministic programs that use reducers. PBFS is also nondeterministic in that it contains benign races, which affect its performance but not its correctness. Fixing these races with mutual-exclusion locks slows down PBFS empirically, but it makes the algorithm amenable to analysis. In particular, we show that for a graph G=(V,E) with diameter D and bounded out-degree, this data-race-free version of the PBFS algorithm runs in time O((V+E)/P + D lg³(V/D)) on P processors, which means that it attains near-perfect linear speedup if P ≪ (V+E)/(D lg³(V/D)).
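
To make the level-synchronous idea in the abstract concrete, the following is a minimal C++ sketch, not the authors' PBFS code: it replaces the serial FIFO queue with a per-level container standing in for the paper's "bag" (here simply a std::vector), and marks in comments where PBFS would process the frontier in parallel under Cilk++, accumulate the next frontier in a Bag reducer hyperobject, and tolerate the benign race on the distance array. The names Graph, row_start, adj, and bfs_levels are illustrative assumptions, not identifiers from the paper.

// Hedged sketch: level-synchronous BFS with a per-level "bag" stand-in.
#include <cstddef>
#include <cstdio>
#include <vector>

// Compressed-sparse-row style graph; field names are illustrative.
struct Graph {
    std::vector<std::size_t> row_start;  // row_start[v]..row_start[v+1] indexes into adj
    std::vector<std::size_t> adj;        // concatenated adjacency lists
    std::size_t num_vertices() const { return row_start.size() - 1; }
};

// Returns BFS distances from source; unreachable vertices keep INF.
std::vector<std::size_t> bfs_levels(const Graph& g, std::size_t source) {
    const std::size_t INF = static_cast<std::size_t>(-1);
    std::vector<std::size_t> dist(g.num_vertices(), INF);
    dist[source] = 0;

    std::vector<std::size_t> frontier{source};   // "bag" holding the current level
    std::size_t level = 0;

    while (!frontier.empty()) {
        std::vector<std::size_t> next;           // "bag" collecting the next level
        // In PBFS this loop over the frontier is spawned in parallel (Cilk++),
        // and 'next' is a Bag reducer hyperobject. The check-then-write on
        // dist[v] is the benign race: two strands may both see INF, but both
        // write the same value, level + 1.
        for (std::size_t u : frontier) {
            for (std::size_t i = g.row_start[u]; i < g.row_start[u + 1]; ++i) {
                std::size_t v = g.adj[i];
                if (dist[v] == INF) {
                    dist[v] = level + 1;
                    next.push_back(v);
                }
            }
        }
        frontier.swap(next);
        ++level;
    }
    return dist;
}

int main() {
    // Tiny example graph: 0 -> {1, 2}, 1 -> {3}, 2 -> {3}, 3 -> {}
    Graph g;
    g.row_start = {0, 2, 3, 4, 4};
    g.adj = {1, 2, 3, 3};
    std::vector<std::size_t> d = bfs_levels(g, 0);
    for (std::size_t v = 0; v < d.size(); ++v)
        std::printf("dist[%zu] = %zu\n", v, d[v]);
    return 0;
}

The sketch runs in O(V+E) serial time, matching the work bound the abstract cites; the paper's contribution is that the Bag reducer lets the per-level loop run in parallel while keeping the total work close to this serial bound.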