Data-Flow Analysis for MPI Programs

  • Authors:
  • Michelle Mills Strout; Barbara Kreaseck; Paul D. Hovland

  • Affiliations:
  • Colorado State University, USA; La Sierra University, USA; Argonne National Laboratory, USA

  • Venue:
  • ICPP '06: Proceedings of the 2006 International Conference on Parallel Processing
  • Year:
  • 2006


Abstract

Message passing via MPI is widely used in single-program, multiple-data (SPMD) parallel programs. Existing data-flow frameworks do not model the semantics of message-passing SPMD programs, which can result in less precise and even incorrect analysis results. We present a data-flow analysis framework for performing interprocedural analysis of message-passing SPMD programs. The framework is based on the MPI-ICFG representation, an interprocedural control-flow graph (ICFG) augmented with communication edges between possible send and receive pairs and with partial context sensitivity. We show how to formulate nonseparable data-flow analyses within our framework, using reaching constants as a canonical example. We also formulate, and provide experimental results for, another nonseparable analysis: activity analysis. Activity analysis is a domain-specific analysis used to reduce the computation and storage requirements of automatically differentiated MPI programs. Automatic differentiation is important for application domains such as climate modeling, electronic device simulation, oil reservoir simulation, medical treatment planning, and computational economics. Our experimental results show that using the MPI-ICFG data-flow analysis framework improves the precision of activity analysis and, as a result, significantly reduces memory requirements for the automatically differentiated versions of a set of parallel benchmarks, including some of the NAS Parallel Benchmarks.
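
To make the core idea of the abstract concrete, the sketch below shows one plausible way to represent an ICFG augmented with communication edges between possible send/receive pairs, together with a generic forward worklist solver that propagates facts along both control-flow and communication edges. This is only a minimal illustration under assumed names (`MpiIcfg`, `forward_solve`, and the `transfer`/`meet` parameters are hypothetical); it is not the authors' implementation or the paper's actual framework interface.

```python
from collections import defaultdict, deque

class MpiIcfg:
    """A control-flow graph augmented with send-to-receive communication edges."""

    def __init__(self):
        self.flow_succs = defaultdict(set)   # ordinary ICFG edges (intraprocedural, call, return)
        self.comm_succs = defaultdict(set)   # edges from possible sends to matching receives

    def add_flow_edge(self, src, dst):
        self.flow_succs[src].add(dst)

    def add_comm_edge(self, send_node, recv_node):
        self.comm_succs[send_node].add(recv_node)

    def successors(self, node):
        # Communication edges are treated as additional flow edges, so a fact
        # holding at a send node can reach the matching receive node.
        return self.flow_succs[node] | self.comm_succs[node]

def forward_solve(graph, entry, init_fact, transfer, meet):
    """Generic forward worklist solver over the augmented graph.

    `transfer(node, fact)` returns the fact after the node's statement;
    `meet(a, b)` combines facts arriving along different edges.
    """
    in_facts = {entry: init_fact}
    worklist = deque([entry])
    while worklist:
        node = worklist.popleft()
        out_fact = transfer(node, in_facts[node])
        for succ in graph.successors(node):
            if succ not in in_facts:
                merged = out_fact
            else:
                merged = meet(in_facts[succ], out_fact)
            if merged != in_facts.get(succ):
                in_facts[succ] = merged
                worklist.append(succ)
    return in_facts
```

For a reaching-constants instance, the facts could map variables to known constant values (or a "not constant" marker), the meet would keep only variables whose values agree on all incoming edges, and the transfer function for a communication edge would carry the sent buffer's value over to the received buffer, which is what distinguishes the MPI-aware graph from a plain ICFG.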