Parallelization libraries: Characterizing and reducing overheads

  • Authors:
  • Abhishek Bhattacharjee; Gilberto Contreras; Margaret Martonosi

  • Affiliations:
  • Rutgers University; Nvidia Corporation; Princeton University

  • Venue:
  • ACM Transactions on Architecture and Code Optimization (TACO)
  • Year:
  • 2011

Abstract

Creating efficient, scalable dynamic parallel runtime systems for chip multiprocessors (CMPs) requires understanding the overheads that manifest at high core counts and small task sizes. In this article, we assess these overheads in Intel's Threading Building Blocks (TBB) and OpenMP. First, we use real hardware and simulations to detail the various scheduler and synchronization overheads; we find that they can amount to 47% of benchmark runtime under TBB and 80% under OpenMP. Second, we propose load-balancing techniques, such as occupancy-based and criticality-guided task stealing, to boost performance. Overall, our study provides valuable insights for creating robust, scalable runtime libraries.
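
To illustrate the occupancy-based stealing idea mentioned in the abstract, the sketch below shows one way a work-stealing scheduler might prefer victims whose task deques are fullest, rather than choosing victims uniformly at random. The WorkerDeque type and pick_victim() helper are hypothetical names for illustration only; they are not taken from TBB's internals or from the paper's implementation.

```cpp
// Minimal sketch: occupancy-based victim selection for task stealing.
// Assumes each worker publishes an approximate count of its queued tasks.
#include <atomic>
#include <cstddef>
#include <vector>

struct WorkerDeque {
    std::atomic<std::size_t> occupancy{0};  // approximate number of queued tasks
    // ... task storage and push/pop/steal operations omitted ...
};

// Classic work stealing picks a victim at random; an occupancy-based policy
// instead steers thieves toward the worker whose deque currently holds the
// most tasks, reducing failed steal attempts when work is unevenly spread.
int pick_victim(const std::vector<WorkerDeque>& workers, int self) {
    int best = -1;
    std::size_t best_occupancy = 0;
    for (int i = 0; i < static_cast<int>(workers.size()); ++i) {
        if (i == self) continue;
        std::size_t occ = workers[i].occupancy.load(std::memory_order_relaxed);
        if (occ > best_occupancy) {
            best_occupancy = occ;
            best = i;
        }
    }
    return best;  // -1 indicates no worker has stealable tasks
}
```

The occupancy counter only needs to be approximate, since it guides a heuristic choice of victim; a stale value costs at most one extra failed steal attempt.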