HELIX: automatic parallelization of irregular programs for chip multiprocessing

  • Authors:
  • Simone Campanoni; Timothy Jones; Glenn Holloway; Vijay Janapa Reddi; Gu-Yeon Wei; David Brooks

  • Affiliations:
  • Harvard University, Cambridge, MA, USA (Campanoni, Holloway, Wei, Brooks); University of Cambridge, Cambridge, UK (Jones); The University of Texas at Austin, Austin, TX, USA (Reddi)

  • Venue:
  • Proceedings of the Tenth International Symposium on Code Generation and Optimization
  • Year:
  • 2012

Abstract

We describe and evaluate HELIX, a new technique for automatic loop parallelization that assigns successive iterations of a loop to separate threads. We show that the inter-thread communication costs forced by loop-carried data dependences can be mitigated by code optimization, by using an effective heuristic for selecting loops to parallelize, and by using helper threads to prefetch synchronization signals. We have implemented HELIX as part of an optimizing compiler framework that automatically selects and parallelizes loops from general sequential programs. The framework uses an analytical model of loop speedups, combined with profile data, to choose loops to parallelize. On a six-core Intel® Core i7-980X, HELIX achieves speedups averaging 2.25x, with a maximum of 4.12x, for thirteen C benchmarks from SPEC CPU2000.
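
To make the core idea concrete, the following is a minimal hand-written sketch in C with pthreads, not HELIX's generated code: each thread runs every Nth iteration of a loop, the independent part of the body runs in parallel, and a loop-carried update (the sequential segment) is ordered across threads by waiting on and then signaling a shared turn counter. The loop body, thread count, and spin-wait synchronization here are illustrative assumptions; HELIX itself identifies sequential segments via dependence analysis, optimizes the resulting communication, and uses helper threads to prefetch the synchronization signals.

```c
/* Illustrative sketch only: HELIX-style assignment of successive loop
 * iterations to threads, with a sequential segment ordered by signals.
 * The body, thread count, and spin-wait are assumptions, not the paper's
 * generated code.  Build: cc -std=c11 -pthread helix_sketch.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4
#define NITERS   1000

static atomic_long turn = 0;  /* iteration whose sequential segment may run  */
static long sum = 0;          /* loop-carried state (the data dependence)    */

/* Independent part of the loop body: safe to run out of iteration order. */
static long independent_work(long i) {
    long x = i;
    for (int k = 0; k < 1000; k++)
        x = (x * 1103515245 + 12345) & 0x7fffffff;
    return x % 7;
}

static void *worker(void *arg) {
    long tid = (long)arg;
    /* Cyclic iteration assignment: thread t runs t, t+NTHREADS, ... */
    for (long i = tid; i < NITERS; i += NTHREADS) {
        long contrib = independent_work(i);  /* parallel section            */
        while (atomic_load(&turn) != i)      /* wait for signal from i-1    */
            ;                                /* (HELIX prefetches signals)  */
        sum += contrib;                      /* sequential segment          */
        atomic_store(&turn, i + 1);          /* signal iteration i+1        */
    }
    return NULL;
}

int main(void) {
    pthread_t th[NTHREADS];
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&th[t], NULL, worker, (void *)t);
    for (long t = 0; t < NTHREADS; t++)
        pthread_join(th[t], NULL);
    printf("sum = %ld\n", sum);  /* matches the sequential loop's result */
    return 0;
}
```

In the compiled setting, HELIX makes the analogous decisions automatically: dependence analysis determines which parts of the body form sequential segments, and the analytical speedup model combined with profile data decides whether parallelizing a given loop is worthwhile.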