Lazy task creation: a technique for increasing the granularity of parallel programs
LFP '90 Proceedings of the 1990 ACM conference on LISP and functional programming
Making asynchronous parallelism safe for the world
POPL '90 Proceedings of the 17th ACM SIGPLAN-SIGACT symposium on Principles of programming languages
SPLASH: Stanford parallel applications for shared-memory
ACM SIGARCH Computer Architecture News
The design and analysis of DASH: a scalable directory-based multiprocessor
PLDI '92 Proceedings of the ACM SIGPLAN 1992 conference on Programming language design and implementation
Parallelizing complex scans and reductions
PLDI '94 Proceedings of the ACM SIGPLAN 1994 conference on Programming language design and implementation
The SPLASH-2 programs: characterization and methodological considerations
ISCA '95 Proceedings of the 22nd annual international symposium on Computer architecture
M-Structures: Extending a Parallel, Non-strict, Functional Language with State
Proceedings of the 5th ACM Conference on Functional Programming Languages and Computer Architecture
Recognizing and Parallelizing Bounded Recurrences
Proceedings of the Fourth International Workshop on Languages and Compilers for Parallel Computing
Arbitrary Order Operations on Trees
Proceedings of the 6th International Workshop on Languages and Compilers for Parallel Computing
Analysis of Dynamic Structures for Efficient Parallel Execution
Proceedings of the 6th International Workshop on Languages and Compilers for Parallel Computing
An Efficient Shared Memory Layer for Distributed Memory Machines.
Exploiting Commuting Operations in Parallelizing Serial Programs
Automatically Parallelizing Serial Programs Using Commutativity Analysis
Commutativity analysis: a new analysis framework for parallelizing compilers
PLDI '96 Proceedings of the ACM SIGPLAN 1996 conference on Programming language design and implementation
Commutativity analysis: a new analysis technique for parallelizing compilers
ACM Transactions on Programming Languages and Systems (TOPLAS)
Quasi-static scheduling for safe futures
Proceedings of the 13th ACM SIGPLAN Symposium on Principles and practice of parallel programming
HAWKEYE: effective discovery of dataflow impediments to parallelization
Proceedings of the 2011 ACM international conference on Object oriented programming systems languages and applications
This paper introduces an analysis technique, commutativity analysis, for automatically parallelizing computations that manipulate dynamic, pointer-based data structures. Commutativity analysis views computations as composed of operations on objects. It then analyzes the program to discover when operations commute, i.e., leave the objects in the same state regardless of the order in which they execute. If all of the operations required to perform a given computation commute, the compiler can automatically generate parallel code. Commutativity analysis eliminates many of the limitations that have prevented existing compilers, which use data dependence analysis, from successfully parallelizing pointer-based applications. It enables compilers to parallelize computations that manipulate graphs, and it eliminates the need to analyze the data structure construction code to extract global properties of the data structure topology. This paper shows how to use symbolic execution and expression manipulation to statically determine that operations commute, and how to exploit the extracted commutativity information to generate parallel code. It also presents performance results demonstrating that commutativity analysis can successfully parallelize the Barnes-Hut hierarchical N-body solver, an important scientific application that manipulates a complex pointer-based data structure.
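The commutativity condition the abstract describes can be illustrated with a small sketch. The paper's compiler establishes commutativity *statically* via symbolic execution; the snippet below instead checks it *dynamically* on a toy object, purely to make the definition concrete. The `Body` class, the `accumulate` operation, and the `commutes` helper are all hypothetical illustrations, not the authors' implementation.

```python
import copy

class Body:
    """Toy node in an N-body-style accumulation (hypothetical example)."""
    def __init__(self):
        self.mass = 0.0
        self.count = 0

    def accumulate(self, m):
        # Two invocations commute: addition and increment are
        # order-independent, so the final state is the same either way.
        self.mass += m
        self.count += 1

def commutes(obj, op_a, op_b):
    """Return True if op_a;op_b and op_b;op_a leave obj in equal states."""
    x, y = copy.deepcopy(obj), copy.deepcopy(obj)
    op_a(x); op_b(x)
    op_b(y); op_a(y)
    return vars(x) == vars(y)

b = Body()
print(commutes(b, lambda o: o.accumulate(1.5), lambda o: o.accumulate(2.5)))
# prints True: since every interleaving yields the same object state,
# a compiler could safely execute these invocations in parallel
```

When every pair of operations in a computation passes this kind of check, the computation's result is independent of execution order, which is exactly the property that lets the compiler emit parallel code without analyzing the data structure's topology.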