Sambamba: runtime adaptive parallel execution

  • Authors:
  • Kevin Streit, Clemens Hammacher, Andreas Zeller, Sebastian Hack

  • Affiliations:
  • Saarland University, Saarbrücken, Germany (all authors)

  • Venue:
  • Proceedings of the 3rd International Workshop on Adaptive Self-Tuning Computing Systems
  • Year:
  • 2013

Abstract

How can we exploit a microprocessor as efficiently as possible? The "classic" approach is static optimization at compile time, conservatively optimizing a program while keeping all possible uses in mind. Further optimization can only be achieved by anticipating the actual usage profile: if we know, for instance, that two computations will be independent, we can run them in parallel. However, brute-force parallelization may slow down execution because of its overhead. Since this overhead depends on runtime features, such as the structure and size of the input data, parallel execution must adapt dynamically to the situation at hand. Our SAMBAMBA framework implements such a dynamic adaptation for regular sequential C programs through adaptive dispatch between sequential and parallel function instances. In an evaluation of 14 programs, we show that automatic parallelization combined with adaptive dispatch can yield speedups of up to 5.2-fold on a quad-core machine with hyperthreading. At this point we rely on programmer annotations, but will drop this requirement as the platform evolves to support efficient speculative optimizations.