The STAPL pList

  • Authors:
  • Gabriel Tanase; Xiabing Xu; Antal Buss; Harshvardhan; Ioannis Papadopoulos; Olga Pearce; Timmie Smith; Nathan Thomas; Mauro Bianco; Nancy M. Amato; Lawrence Rauchwerger

  • Affiliations:
  • Parasol Lab, Dept. of Computer Science and Engineering, Texas A&M University (all authors)

  • Venue:
  • LCPC'09 Proceedings of the 22nd international conference on Languages and Compilers for Parallel Computing
  • Year:
  • 2009

Abstract

We present the design and implementation of the stapl pList, a parallel container that has the properties of a sequential list but allows scalable concurrent access when used in a parallel program. The Standard Template Adaptive Parallel Library (stapl) is a parallel programming library that extends C++ with support for parallelism. stapl provides a collection of distributed data structures (pContainers) and parallel algorithms (pAlgorithms), along with a generic methodology for extending them to provide customized functionality. stapl pContainers are thread-safe, concurrent objects that provide appropriate interfaces (e.g., views) usable by generic pAlgorithms. The pList provides stl-equivalent methods, such as insert, erase, and splice; additional methods such as split; and efficient asynchronous (non-blocking) variants of some methods for improved parallel performance. We evaluate the performance of the stapl pList on an IBM Power 5 cluster and on a CRAY XT4 massively parallel processing system. Although lists are generally not considered good data structures for parallel processing, we show that pList methods and pAlgorithms (p_generate and p_partial_sum) operating on pLists provide good scalability on more than 10^3 processors, and that the pList compares favorably with other dynamic data structures such as the pVector.