Improving performance of adaptive component-based dataflow middleware

  • Authors:
  • Timothy D. R. Hartley; Erik Saule; Ümit V. Çatalyürek

  • Affiliations:
  • Timothy D. R. Hartley: Department of Electrical and Computer Engineering and Department of Biomedical Informatics, The Ohio State University, Columbus, OH, USA
  • Erik Saule: Department of Biomedical Informatics, The Ohio State University, Columbus, OH, USA
  • Ümit V. Çatalyürek: Department of Electrical and Computer Engineering and Department of Biomedical Informatics, The Ohio State University, Columbus, OH, USA

  • Venue:
  • Parallel Computing
  • Year:
  • 2012


Abstract

Making the best use of modern computational resources for distributed applications requires either expert knowledge of low-level programming tools or a productive high-level, high-performance programming framework. Unfortunately, even state-of-the-art high-level frameworks still require the developer to carry out a tedious manual tuning step to find the work partitioning that gives the best application execution performance. Here, we present a novel framework with which developers can easily create high-performance dataflow applications without this tedious tuning process. We compare the performance of our approach to that of three distributed programming frameworks which differ significantly in their programming paradigm, their support for multi-core CPUs and accelerators, and their load-balancing approach: DataCutter, a component-based dataflow framework; KAAPI, a framework using asynchronous function calls; and MR-MPI, a MapReduce implementation. By highly optimizing the implementations of three applications on the four frameworks and comparing the execution-time performance of the runtime engines, we show their strengths and weaknesses. We show that our approach achieves good performance for a wide range of applications, with a much-reduced development cost.
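To make the component-based dataflow model referred to in the abstract concrete, the sketch below shows a minimal pipeline of filters connected by streams, with each filter running concurrently and data items flowing through. This is an illustrative toy, not DataCutter's actual API or the paper's framework; all names (`Filter`, `SENTINEL`, the stage functions) are hypothetical.

```python
import threading
import queue

SENTINEL = object()  # hypothetical end-of-stream marker

class Filter(threading.Thread):
    """One dataflow component: reads items from an input stream,
    applies a user-supplied function, and writes results downstream.
    (Illustrative sketch only; not DataCutter's API.)"""
    def __init__(self, fn, in_stream, out_stream):
        super().__init__()
        self.fn = fn
        self.in_stream = in_stream
        self.out_stream = out_stream

    def run(self):
        while True:
            item = self.in_stream.get()
            if item is SENTINEL:
                # Propagate end-of-stream and terminate this component.
                self.out_stream.put(SENTINEL)
                return
            self.out_stream.put(self.fn(item))

# Build a two-stage pipeline: square each number, then add one.
s0, s1, s2 = queue.Queue(), queue.Queue(), queue.Queue()
stages = [Filter(lambda x: x * x, s0, s1),
          Filter(lambda x: x + 1, s1, s2)]
for f in stages:
    f.start()

for n in range(5):          # feed the source stream
    s0.put(n)
s0.put(SENTINEL)
for f in stages:
    f.join()

results = []
while True:
    item = s2.get()
    if item is SENTINEL:
        break
    results.append(item)
print(results)  # [1, 2, 5, 10, 17]
```

In a real middleware of this kind, the streams would span process and machine boundaries, and the runtime would decide how many copies of each filter to run; the manual tuning the abstract criticizes corresponds to choosing those copy counts and work partitions by hand.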