Optimistic parallelism requires abstractions

  • Authors:
  • Milind Kulkarni (University of Texas, Austin); Keshav Pingali (University of Texas, Austin); Bruce Walter (Cornell University, Ithaca, NY); Ganesh Ramanarayanan (Cornell University, Ithaca, NY); Kavita Bala (Cornell University, Ithaca, NY); L. Paul Chew (Cornell University, Ithaca, NY)

  • Venue:
  • Communications of the ACM
  • Year:
  • 2009

Abstract

The problem of writing software for multicore processors would be greatly simplified if we could automatically parallelize sequential programs. Although auto-parallelization has been studied for many decades, it has succeeded only in a few application areas, such as dense matrix computations. In particular, auto-parallelization of irregular programs, which are organized around large, pointer-based data structures like graphs, has seemed intractable. The Galois project is taking a fresh look at auto-parallelization. Rather than attempt to parallelize all programs no matter how obscurely they are written, we are designing programming abstractions that permit programmers to highlight opportunities for exploiting parallelism in sequential programs, and building a runtime system that uses these hints to execute the program in parallel. In this paper, we describe the design and implementation of a system based on these ideas. Experimental results for two real-world irregular applications, a Delaunay mesh refinement application and a graphics application that performs agglomerative clustering, demonstrate that this approach is promising.
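To make the idea concrete, the sketch below shows, in Java, roughly what an optimistically parallelized worklist loop of this kind might look like. It is a minimal illustration, not the Galois API: the names (foreachUnordered, Element, addWork) are invented here, and the per-element tryLock with re-enqueue is a crude stand-in for the runtime's conflict detection and rollback. The programmer writes what reads like a sequential loop over an unordered worklist; a runtime is free to run iterations speculatively in parallel because iterations over an unordered set may commit in any order.

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.locks.ReentrantLock;
    import java.util.function.BiConsumer;
    import java.util.function.Consumer;

    /**
     * Hypothetical sketch of a Galois-style unordered set iterator.
     * Names and conflict handling are illustrative only.
     */
    public class OptimisticLoopSketch {

        /** A work item; its lock marks the "neighborhood" an iteration must own. */
        static class Element {
            final ReentrantLock lock = new ReentrantLock();
            final int value;
            Element(int value) { this.value = value; }
        }

        static void foreachUnordered(Queue<Element> worklist,
                                     BiConsumer<Element, Consumer<Element>> body,
                                     int threads) throws InterruptedException {
            AtomicInteger pending = new AtomicInteger(worklist.size());
            Consumer<Element> addWork = e -> { pending.incrementAndGet(); worklist.add(e); };

            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int t = 0; t < threads; t++) {
                pool.submit(() -> {
                    while (pending.get() > 0) {
                        Element e = worklist.poll();
                        if (e == null) { Thread.onSpinWait(); continue; }
                        if (e.lock.tryLock()) {            // claim this iteration's neighborhood
                            try {
                                body.accept(e, addWork);   // may create new work items
                                pending.decrementAndGet(); // commit the iteration
                            } finally {
                                e.lock.unlock();
                            }
                        } else {
                            worklist.add(e);               // conflict: give the item back, retry later
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }

        public static void main(String[] args) throws InterruptedException {
            // Toy workload shaped like Delaunay refinement: processing one "bad"
            // element can create further work that goes back on the worklist.
            Queue<Element> worklist = new ConcurrentLinkedQueue<>();
            for (int i = 0; i < 100; i++) worklist.add(new Element(i));

            foreachUnordered(worklist, (e, addWork) -> {
                if (e.value > 10) addWork.accept(new Element(e.value / 2));
            }, 4);

            System.out.println("worklist drained");
        }
    }

In this toy, new work discovered while processing an item simply goes back on the shared worklist, mirroring the way refining one bad triangle can produce new bad triangles; the real runtime described in the paper does considerably more (speculation, rollback, commutativity-aware conflict checks), which this sketch does not attempt to reproduce.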