Commutativity Analysis: A Technique for Automatically Parallelizing Pointer-Based Computations

  • Authors:
  • Martin C. Rinard; Pedro C. Diniz


  • Venue:
  • IPPS '96 Proceedings of the 10th International Parallel Processing Symposium
  • Year:
  • 1996

Abstract

This paper introduces an analysis technique, commutativity analysis, for automatically parallelizing computations that manipulate dynamic, pointer-based data structures. Commutativity analysis views computations as composed of operations on objects. It then analyzes the program to discover when operations commute, i.e., leave the objects in the same final state regardless of the order in which they execute. If all of the operations required to perform a given computation commute, the compiler can automatically generate parallel code. Commutativity analysis eliminates many of the limitations that have prevented existing compilers, which use data dependence analysis, from successfully parallelizing pointer-based applications. It enables compilers to parallelize computations that manipulate graphs, and it eliminates the need to analyze the data structure construction code to extract global properties of the data structure topology. This paper shows how to use symbolic execution and expression manipulation to statically determine that operations commute, and how to exploit the extracted commutativity information to generate parallel code. It also presents performance results demonstrating that commutativity analysis can successfully parallelize the Barnes-Hut hierarchical N-body solver, an important scientific application that manipulates a complex pointer-based data structure.
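The core condition the abstract describes can be illustrated with a small sketch. The `Body` class, the `add_force` operation, and the dynamic `commutes` check below are all hypothetical illustrations, not the paper's actual formalism: the paper establishes commutativity *statically* via symbolic execution, whereas this sketch merely tests the defining property at runtime, i.e., that applying two operations in either order leaves the object in the same final state.

```python
import copy

class Body:
    """Hypothetical stand-in for a node in a pointer-based N-body structure."""
    def __init__(self):
        self.force = 0.0

def add_force(body, f):
    # Accumulating a force contribution: additions into the same field
    # produce the same final state in either order, so two such
    # operations commute in the sense the abstract defines.
    body.force += f

def commutes(op_a, op_b, obj):
    """Dynamic check of the commutativity condition: run op_a;op_b and
    op_b;op_a on independent copies of obj and compare final states.
    (The paper's compiler proves this statically, without running code.)"""
    x, y = copy.deepcopy(obj), copy.deepcopy(obj)
    op_a(x); op_b(x)
    op_b(y); op_a(y)
    return x.force == y.force

# Two force accumulations commute; a compiler that can establish this
# for all operation pairs in a computation may execute them in parallel.
result = commutes(lambda b: add_force(b, 1.5),
                  lambda b: add_force(b, 2.5),
                  Body())
print(result)
```

If every pair of operations in a computation passes this condition, their execution order is irrelevant, which is precisely what licenses the parallel code generation described above.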