Experimenting with software testbeds for evaluating new technologies

  • Authors:
  • Mikael Lindvall, Ioana Rus, Paolo Donzelli, Atif Memon, Marvin Zelkowitz, Aysu Betin-Can, Tevfik Bultan, Chris Ackermann, Bettina Anders, Sima Asgari, Victor Basili, Lorin Hochstein, Jörg Fellmann, Forrest Shull, Roseanne Tvedt, Daniel Pech, Daniel Hirschbach

  • Affiliations:
  • Fraunhofer Center for Experimental Software Engineering, College Park, USA; Computer Science Department, University of Maryland, College Park, USA 20742; Informatics Institute, Middle East Technical University, Ankara, Turkey; University of California at Santa Barbara, Santa Barbara, USA; Department of Computer Science and Engineering, University of Nebraska-Lincoln, Lincoln, NE 68588-0115

  • Venue:
  • Empirical Software Engineering
  • Year:
  • 2007


Abstract

The evolution of a new technology depends upon a good theoretical basis for developing the technology, as well as upon its experimental validation. To support such experimentation, we have investigated the creation of a software testbed and the feasibility of using the same testbed for experimenting with a broad set of technologies. The testbed is a set of programs, data, and supporting documentation that allows researchers to test their new technology on a standard software platform. An important component of this testbed is the Unified Model of Dependability (UMD), which was used to elicit dependability requirements for the testbed software. With a collection of seeded faults and known issues of the target system, we are able to determine whether a new technology is adept at uncovering defects or provides other aids claimed by its developers. In this paper, we present the Tactical Separation Assisted Flight Environment (TSAFE) testbed, for which we modeled and evaluated dependability requirements and defined faults to be seeded for experimentation. We describe two completed experiments that we conducted on the testbed. The first experiment studies a technology that identifies architectural violations and evaluates its ability to detect them. The second experiment studies model checking as part of design for verification. We conclude by describing ongoing experimental work on testing, using the same testbed. Our conclusion is that even though these three experiments differ greatly in the technologies studied, using and re-using the same testbed is beneficial and cost effective.