On the validity of flow-level TCP network models for grid and cloud simulations

  • Authors:
  • Pedro Velho; Lucas Mello Schnorr; Henri Casanova; Arnaud Legrand

  • Affiliations:
  • UFRGS, Institute of Informatics, Porto Alegre, Brazil; Dept. of Computer and Information Sciences, University of Hawai‘i at Manoa, U.S.A.; CNRS, Grenoble University, France

  • Venue:
  • ACM Transactions on Modeling and Computer Simulation (TOMACS)
  • Year:
  • 2013

Abstract

Researchers in the area of grid/cloud computing perform many of their experiments using simulations that must capture network behavior. In this context, packet-level simulations, which are widely used to study network protocols, are too costly given the typical large scales of simulated systems and applications. An alternative is to implement network simulations with less costly flow-level models. Several flow-level models have been proposed and implemented in grid/cloud simulators. Surprisingly, published validations of these models, if any, consist of verifications for only a few simple cases. Consequently, even when they have been used to obtain published results, the ability of these simulators to produce scientifically meaningful results is in doubt. This work evaluates these state-of-the-art flow-level network models of TCP communication via comparison to packet-level simulation. While it is straightforward to show cases in which previously proposed models lead to good results, we instead follow the critical method, which places model refutation at the center of scientific activity, and systematically seek cases that lead to invalid results. Careful analysis of these cases reveals fundamental flaws and also suggests improvements. One contribution of this work is that these improvements lead to a new model that, while far from perfect, improves upon all previously proposed models in the context of grid and cloud simulation. A more important contribution, perhaps, is provided by the pitfalls and unexpected behaviors encountered in this work, leading to a number of enlightening lessons. In particular, this work shows that model validation cannot be achieved solely by exhibiting (possibly many) “good cases.” Confidence in the quality of a model can only be strengthened through an invalidation approach that attempts to prove the model wrong.
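To make the contrast with packet-level simulation concrete, the sketch below shows the kind of computation a flow-level model performs: rather than simulating individual packets, it solves a bandwidth-sharing problem over the network, assigning each flow a steady-state rate. This is an illustrative max-min fair allocation only, not the authors' model or any specific simulator's implementation; the link names, capacities, and flows are made up for the example.

```python
# Illustrative flow-level bandwidth sharing via progressive filling
# (max-min fairness). Not the paper's model; a generic sketch of the
# class of computations flow-level simulators perform.

def max_min_share(links, flows):
    """links: {link name: capacity}; flows: {flow name: list of links traversed}.
    Returns {flow name: allocated bandwidth} under max-min fairness."""
    capacity = dict(links)
    active = {f: list(path) for f, path in flows.items()}
    alloc = {}
    while active:
        # Fair share on each link still crossed by at least one active flow.
        share = {
            l: capacity[l] / sum(1 for p in active.values() if l in p)
            for l in capacity
            if any(l in p for p in active.values())
        }
        # The most constrained link saturates first; flows through it
        # are frozen at that link's fair share.
        bottleneck = min(share, key=share.get)
        rate = share[bottleneck]
        for f in [f for f, p in active.items() if bottleneck in p]:
            alloc[f] = rate
            for l in active.pop(f):
                capacity[l] -= rate
    return alloc

# Hypothetical topology: flow f1 uses link A (10 units of capacity);
# flow f2 uses links A and B (B has 3 units and is the bottleneck).
rates = max_min_share({"A": 10.0, "B": 3.0},
                      {"f1": ["A"], "f2": ["A", "B"]})
# f2 is capped at 3 by link B; f1 receives the remaining 7 on link A.
```

A single such solve replaces the simulation of thousands of packet transmissions, which is why flow-level models scale to grid/cloud workloads; the paper's point is that this speed is worthless unless the resulting rates are validated against packet-level ground truth. Real TCP bandwidth sharing deviates from pure max-min fairness (e.g., RTT bias), which is exactly the kind of discrepancy an invalidation-driven study surfaces.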