ScalaTrace: tracing, analysis and modeling of HPC codes at scale

  • Authors and affiliations:
  • Frank Mueller, Dept. of Computer Science, North Carolina State University, Raleigh, NC
  • Xing Wu, Dept. of Computer Science, North Carolina State University, Raleigh, NC
  • Martin Schulz, Lawrence Livermore National Laboratory, Center for Applied Scientific Computing, Livermore, CA
  • Bronis R. de Supinski, Lawrence Livermore National Laboratory, Center for Applied Scientific Computing, Livermore, CA
  • Todd Gamblin, Lawrence Livermore National Laboratory, Center for Applied Scientific Computing, Livermore, CA

  • Venue:
  • PARA'10: Proceedings of the 10th International Conference on Applied Parallel and Scientific Computing - Volume 2
  • Year:
  • 2010

Abstract

Characterizing the communication behavior of large-scale applications is a difficult and costly task due to code/system complexity and long execution times. An alternative to running the actual codes is to gather their communication traces and then replay them, which facilitates application tuning and future procurements. While past approaches lacked lossless, scalable trace collection, we contribute an approach that yields communication traces that are orders of magnitude smaller, and often near constant size, regardless of the number of nodes, while preserving structural information. We introduce intra- and inter-node compression techniques for MPI events, we develop a scheme to preserve the time and causality of communication events, and we present results from our implementation on BlueGene/L. Given this novel capability, we discuss its impact on communication tuning and on trace extrapolation. To the best of our knowledge, such a concise representation of MPI traces, collected in a scalable manner and combined with time-preserving deterministic MPI call replay, is without precedent.
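To illustrate the kind of trace collection the abstract describes, the sketch below shows MPI call interposition via the standard PMPI profiling interface together with a simple run-length merge of identical, consecutive send events. This is only a minimal illustration of per-node event recording and compression, not the authors' implementation; the event fields, buffer size, and the run-length scheme are assumptions chosen for brevity.

/* Sketch: intercept MPI_Send through the PMPI profiling interface and
 * run-length compress identical back-to-back events (illustrative only). */
#include <mpi.h>
#include <stdio.h>

typedef struct {
    int  dest;     /* destination rank                               */
    int  count;    /* number of elements sent                        */
    int  tag;      /* message tag                                    */
    long repeats;  /* how many times this identical event repeated   */
} SendEvent;

#define MAX_EVENTS 1024
static SendEvent trace[MAX_EVENTS];
static int n_events = 0;

/* Record a send; merge with the previous event if it is identical. */
static void record_send(int dest, int count, int tag)
{
    if (n_events > 0) {
        SendEvent *last = &trace[n_events - 1];
        if (last->dest == dest && last->count == count && last->tag == tag) {
            last->repeats++;
            return;
        }
    }
    if (n_events < MAX_EVENTS) {
        trace[n_events].dest    = dest;
        trace[n_events].count   = count;
        trace[n_events].tag     = tag;
        trace[n_events].repeats = 1;
        n_events++;
    }
}

/* PMPI interposition: application calls to MPI_Send land here, the event is
 * recorded, and the call is forwarded to the real MPI implementation. */
int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    record_send(dest, count, tag);
    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}

/* Dump the compressed per-rank trace at finalize time. */
int MPI_Finalize(void)
{
    int rank;
    PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (int i = 0; i < n_events; i++)
        printf("rank %d: MPI_Send(dest=%d, count=%d, tag=%d) x%ld\n",
               rank, trace[i].dest, trace[i].count, trace[i].tag,
               trace[i].repeats);
    return PMPI_Finalize();
}

Linking such a wrapper library ahead of the MPI library lets an unmodified application generate a compressed per-rank event list; the paper's approach goes much further, capturing loop structure and merging traces across nodes to approach constant size.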