Experiments with test case generation and runtime analysis

  • Authors and affiliations:
  • Cyrille Artho (Computer Systems Institute, ETH Zurich, Zurich, Switzerland)
  • Doron Drusinsky (Naval Postgraduate School, Monterey, California, and Time Rover, Inc., Cupertino, California)
  • Allen Goldberg (Kestrel Technology, NASA Ames Research Center, Moffett Field, California)
  • Klaus Havelund (Kestrel Technology, NASA Ames Research Center, Moffett Field, California)
  • Mike Lowry (NASA Ames Research Center, Moffett Field, California)
  • Corina Pasareanu (Kestrel Technology, NASA Ames Research Center, Moffett Field, California)
  • Grigore Rosu (Department of Computer Science, University of Illinois at Urbana-Champaign)
  • Willem Visser (RIACS, NASA Ames Research Center, Moffett Field, California)

  • Venue:
  • ASM'03: Proceedings of the 10th International Conference on Abstract State Machines, Advances in Theory and Practice
  • Year:
  • 2003

Abstract

Software testing is typically an ad hoc process in which human testers manually write many test inputs and expected test results, perhaps automating their execution in a regression suite. This process is cumbersome and costly. This paper reports preliminary results on an approach to further automate it. The approach combines automated test case generation, based on systematically exploring the program's input domain, with runtime analysis, in which execution traces are monitored and verified against temporal logic specifications or analyzed with advanced algorithms for detecting concurrency errors such as data races and deadlocks. The approach suggests generating specifications dynamically, per input instance, rather than statically once and for all. The paper describes experiments with variants of this approach in the context of two examples: a planetary rover controller and a spacecraft fault protection system.
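
The concurrency-error side of the runtime analysis can be illustrated with a small, self-contained sketch. The code below is not the tool pipeline described in the paper; it is a minimal, illustrative lockset-style check (in the spirit of Eraser-like race analysis) applied to a hypothetical recorded trace, with invented thread, field, and lock names. A field accessed by more than one thread whose set of consistently held locks is empty is flagged as a potential data race.

```java
import java.util.*;

/** Minimal, illustrative lockset-style race check over a recorded trace (Java 16+). */
public class LocksetSketch {

    /** One access event in the trace: which thread touched which field, holding which locks. */
    record Access(String thread, String field, Set<String> locksHeld) {}

    public static void main(String[] args) {
        // Hypothetical trace of shared-field accesses observed during an instrumented run.
        List<Access> trace = List.of(
            new Access("T1", "rover.speed",   Set.of("speedLock")),
            new Access("T2", "rover.speed",   Set.of("speedLock")),
            new Access("T1", "rover.heading", Set.of("navLock")),
            new Access("T2", "rover.heading", Set.of())            // unsynchronized access
        );

        // Candidate locks protecting each field: intersection of locks held over all accesses.
        Map<String, Set<String>> candidates = new HashMap<>();
        for (Access a : trace) {
            candidates.merge(a.field(), new HashSet<>(a.locksHeld()),
                             (old, now) -> { old.retainAll(now); return old; });
        }

        // Record which threads touched each field.
        Map<String, Set<String>> threadsPerField = new HashMap<>();
        for (Access a : trace) {
            threadsPerField.computeIfAbsent(a.field(), f -> new HashSet<>()).add(a.thread());
        }

        // An empty candidate lockset on a field shared by several threads signals a potential race.
        candidates.forEach((field, locks) -> {
            if (locks.isEmpty() && threadsPerField.get(field).size() > 1) {
                System.out.println("potential data race on " + field);
            }
        });
    }
}
```

In the setting the abstract describes, such trace analyses run over executions driven by the automatically generated test cases, so the same instrumentation that records accesses can also feed temporal logic monitors over the trace.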