Using TPIE for processing massive data sets in C++

  • Authors: Thomas Mølhave
  • Affiliations: Duke University, Durham, NC
  • Venue: SIGSPATIAL Special
  • Year: 2012

Abstract

The adoption of I/O-efficient algorithms in commercial and research applications can be facilitated by well-designed software libraries. The Templated Portable I/O Environment (TPIE) [2] for C++ is one such library, based on the I/O model of Aggarwal and Vitter [3]. TPIE contains a number of powerful algorithms and data structures, enabling the user to quickly develop software that scales to very large data sets. Figure 1 illustrates the power of I/O-efficient algorithms in general, and TPIE in particular, in this case using an external-memory sorting algorithm and priority queue. As the data size grows close to the 6 GiB of main memory of the computer, the sorting algorithm from the C++ Standard Template Library (STL), std::sort, slows down dramatically. Beyond that point, using std::sort is infeasible, as running times extend into days and weeks even for data sizes only slightly larger than main memory. STL's std::priority_queue behaves in the same way. The sorting algorithm and the priority queue from TPIE remain well behaved even as the size of the input data grows to terabytes. The STXXL [8] and LEDA-SM [5] libraries have goals and features similar to those of TPIE. STXXL aims to stay very close to the STL but also offers pipelining and some use of multiple cores. LEDA-SM is an extension to the Library of Efficient Data Types and Algorithms (LEDA) and consists of a number of I/O-efficient data structures and algorithms; unfortunately, the project is no longer active according to a statement on its website. On a slightly different level, the cluster-friendly FG [4] library provides a framework for pipeline-structured programs that also scale to large data sets; a significantly reworked version 2.0 has been announced on the project website. Moving further into the distributed computing paradigm, the MapReduce [7] and Hadoop [1] frameworks are very popular for implementing algorithms on clusters with large numbers of computing nodes, but they are outside the scope of this article. We refer to [10] and the references therein for a more extensive survey of I/O-efficient algorithms and software libraries.
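
To give a concrete flavor of the interface, the sketch below writes a few items to an on-disk stream, sorts them with TPIE's external-memory sort, and then exercises the I/O-efficient priority queue. This is a minimal sketch only: the exact headers, the tpie::sort overload with a progress indicator, the memory-manager call, and the default tpie::priority_queue constructor are assumptions based on recent TPIE releases and may differ between library versions; the TPIE documentation has the authoritative interface.

    // Minimal sketch (assumed TPIE interface; verify against your TPIE version).
    #include <iostream>
    #include <tpie/tpie.h>                     // tpie_init / tpie_finish
    #include <tpie/file_stream.h>              // on-disk stream of fixed-size items
    #include <tpie/sort.h>                     // external-memory sort
    #include <tpie/priority_queue.h>           // I/O-efficient priority queue
    #include <tpie/progress_indicator_null.h>  // silent progress reporting (assumed header)

    int main() {
        tpie::tpie_init();
        // Tell TPIE how much internal memory it may use (here: 1 GiB).
        tpie::get_memory_manager().set_limit(1ull << 30);

        {
            // Write a few items to a stream backed by a file on disk.
            tpie::file_stream<double> in;
            in.open("unsorted.tpie");
            for (double x : {3.0, 1.0, 2.0}) in.write(x);
            in.seek(0);

            // Sort into a second stream in external memory; this is the
            // operation that keeps scaling when std::sort no longer fits in RAM.
            tpie::file_stream<double> out;
            out.open("sorted.tpie");
            tpie::progress_indicator_null pi;
            tpie::sort(in, out, pi);

            out.seek(0);
            while (out.can_read()) std::cout << out.read() << '\n';
        }

        {
            // The I/O-efficient priority queue offers a push/top/pop interface
            // similar to std::priority_queue but spills to disk as it grows.
            tpie::priority_queue<int> pq;
            pq.push(42);
            pq.push(7);
            while (!pq.empty()) {
                std::cout << pq.top() << '\n';
                pq.pop();
            }
        }

        tpie::tpie_finish();
        return 0;
    }

The structure mirrors the benchmark described above: the same push/pop and read/write patterns that overwhelm the in-memory STL containers are handled by TPIE's disk-backed streams and data structures, with TPIE managing the internal-memory budget set at initialization.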