Providing Test Quality Feedback Using Static Source Code and Automatic Test Suite Metrics

  • Authors and Affiliations:
  • Nachiappan Nagappan (Microsoft Research)
  • Laurie Williams (North Carolina State University)
  • Jason Osborne (North Carolina State University)
  • Mladen Vouk (North Carolina State University)
  • Pekka Abrahamsson (VTT Technical Research Center of Finland)

  • Venue:
  • ISSRE '05 Proceedings of the 16th IEEE International Symposium on Software Reliability Engineering
  • Year:
  • 2005


Abstract

A classic question in software development is "How much testing is enough?" Aside from dynamic coverage-based metrics, few measures are available to provide guidance on the quality of an automatic test suite as development proceeds. This paper uses the Software Testing and Reliability Early Warning (STREW) static metric suite to give developers feedback on changes and additions to their automated unit test suite and code, providing added confidence that product quality will be high. Retrospective case studies to assess the utility of the STREW metrics as a feedback mechanism were performed in academic, open-source, and industrial environments. The results indicate, at statistically significant levels, that the STREW metrics can provide feedback on important attributes of an automatic test suite and the corresponding code.
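
The STREW suite is computed statically from the production and test source code. As a rough illustration of the idea only, and not the authors' STREW-J tooling, the sketch below computes two ratio-style metrics of the kind STREW draws on, test-to-source lines of code and assertions per source line, for a Java/JUnit project; the directory layout, file extensions, and the simple line-counting rules are assumptions made for this example.

import re
from pathlib import Path

def count_loc(path):
    """Count non-blank lines that are not comment-only in a Java file (rough heuristic)."""
    loc = 0
    for line in path.read_text(errors="ignore").splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(("//", "*", "/*")):
            loc += 1
    return loc

def strew_style_ratios(source_dir, test_dir):
    """Compute two illustrative STREW-style ratios from static code counts."""
    source_loc = sum(count_loc(p) for p in Path(source_dir).rglob("*.java"))
    test_loc = sum(count_loc(p) for p in Path(test_dir).rglob("*.java"))
    # Count JUnit-style assertion calls (assertEquals, assertTrue, ...) in the test code.
    assert_pattern = re.compile(r"\bassert[A-Z]\w*\s*\(")
    assertions = sum(
        len(assert_pattern.findall(p.read_text(errors="ignore")))
        for p in Path(test_dir).rglob("*.java")
    )
    return {
        "test_loc / source_loc": test_loc / source_loc if source_loc else 0.0,
        "assertions / source_loc": assertions / source_loc if source_loc else 0.0,
    }

if __name__ == "__main__":
    # Hypothetical Maven-style layout; adjust the paths for a real project.
    for name, value in strew_style_ratios("src/main/java", "src/test/java").items():
        print(f"{name}: {value:.3f}")

A ratio that is low relative to comparable projects would flag the test suite for further attention, which is the kind of early feedback the STREW approach is intended to give.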