Benchmarking graph-processing platforms: a vision

  • Authors:
  • Yong Guo (TU Delft, Delft, Netherlands)
  • Ana Lucia Varbanescu (University of Amsterdam, Amsterdam, Netherlands)
  • Alexandru Iosup (TU Delft, Delft, Netherlands)
  • Claudio Martella (VU University Amsterdam, Amsterdam, Netherlands)
  • Theodore L. Willke (Systems Architecture Lab, Intel Corporation, Portland, USA)

  • Venue:
  • Proceedings of the 5th ACM/SPEC International Conference on Performance Engineering
  • Year:
  • 2014

Abstract

Processing graphs, especially at large scale, is an increasingly useful activity in a variety of business, engineering, and scientific domains. Already, there are tens of graph-processing platforms, such as Hadoop, Giraph, and GraphLab, each with a different design and functionality. For graph processing to continue to evolve, users have to find it easy to select a graph-processing platform, and developers and system integrators have to find it easy to quantify performance and other non-functional aspects of interest. However, the state of performance analysis of graph-processing platforms is still immature: studies are few, the few that exist share little methodology, and there is relatively little understanding of the impact of dataset and algorithm diversity on performance. Our vision is to develop, with the help of the performance-savvy community, a comprehensive benchmarking suite for graph-processing platforms. In this work, we take a step in this direction by proposing a set of seven challenges, summarizing our previous work on performance evaluation of distributed graph-processing platforms, and introducing our ongoing work within the SPEC Research Group's Cloud Working Group.