Parallelizing heavyweight debugging tools with mpiecho

  • Authors:
  • Barry Rountree, Todd Gamblin, Bronis R. De Supinski, and Martin Schulz (Lawrence Livermore National Laboratory, 7000 East Ave., Livermore, CA 94550, United States); David K. Lowenthal (Department of Computer Science, University of Arizona, Tucson, AZ 85721, United States); Guy Cobb (Google, Inc.); Henry Tufo (Department of Computer Science, University of Colorado, Boulder, CO 80309, United States)

  • Venue:
  • Parallel Computing
  • Year:
  • 2013

Abstract

Idioms created for debugging execution on single processors and multicore systems have been successfully scaled to thousands of processors, but there is little hope that this class of techniques can continue to scale out to tens of millions of cores. To allow development of more scalable debugging idioms, we introduce mpiecho, a novel runtime platform that enables cloning of MPI ranks. Given identical execution on each clone, we then show how heavyweight debugging approaches can be parallelized, reducing their overhead to a fraction of the serialized case. We also show how this platform can be useful in isolating the source of hardware-based nondeterministic behavior, and we provide a case study based on a recent processor bug at LLNL. While total overhead depends on the individual tool, we show that the platform itself contributes little: 512x tool parallelization incurs at worst 2x overhead across the NAS Parallel Benchmarks, and hardware fault isolation contributes at worst an additional 44% overhead. Finally, we show how mpiecho can lead to a near-linear reduction in overhead when combined with maid, a heavyweight memory-tracking tool built on Intel's Pin platform. We demonstrate overhead reductions from 1466% to 53% and from 740% to 14% for cg (class D, 64 processes) and lu (class D, 64 processes), respectively, using only an additional 64 cores.
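To make the rank-cloning idea concrete, the sketch below shows one way clone groups could be carved out of MPI_COMM_WORLD so that several physical ranks back each logical application rank, with each clone free to run a different slice of a heavyweight tool's analysis. This is a minimal illustration only, not the actual mpiecho API; the constant CLONES_PER_RANK and the primary/clone roles are assumptions made for the example.

```c
/* Minimal sketch (not the mpiecho implementation): group physical ranks into
 * clone sets, where every member of a set stands in for the same logical
 * application rank.  CLONES_PER_RANK is an assumed clone count. */
#include <mpi.h>
#include <stdio.h>

#define CLONES_PER_RANK 4   /* assumed number of physical ranks per logical rank */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Map each physical rank to a (logical rank, clone id) pair. */
    int logical_rank = world_rank / CLONES_PER_RANK;
    int clone_id     = world_rank % CLONES_PER_RANK;

    /* One communicator per clone set: clone 0 could act as the primary that
     * performs the application's real communication, while the remaining
     * clones mirror its execution and split the tool's instrumentation work. */
    MPI_Comm clone_comm;
    MPI_Comm_split(MPI_COMM_WORLD, logical_rank, clone_id, &clone_comm);

    printf("physical rank %d of %d -> logical rank %d (clone %d of %d)\n",
           world_rank, world_size, logical_rank, clone_id, CLONES_PER_RANK);

    MPI_Comm_free(&clone_comm);
    MPI_Finalize();
    return 0;
}
```

Under this layout, a tool parallelized 512x would simply use CLONES_PER_RANK = 512 and assign each clone a disjoint portion of the analysis (for example, a subset of watched memory regions), which is consistent with the paper's report that the platform itself adds at worst 2x overhead.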