Comparing bug finding tools with reviews and tests

  • Authors:
  • Stefan Wagner, Jan Jürjens, Claudia Koller, Peter Trischberger

  • Affiliations:
  • Stefan Wagner, Jan Jürjens, Claudia Koller: Institut für Informatik, Technische Universität München, Garching, Germany; Peter Trischberger: O2 Germany, Munich, Germany

  • Venue:
  • TestCom '05: Proceedings of the 17th IFIP TC6/WG 6.1 International Conference on Testing of Communicating Systems
  • Year:
  • 2005


Abstract

Bug finding tools can find defects in software source code using automated static analysis. This automation may reduce the time spent on other testing and review activities. To assess this, we need a clear understanding of how the defects found by bug finding tools relate to the defects found by other techniques. This paper describes a case study of several projects, mainly from an industrial environment, that were used to analyse these interrelationships. The main finding is that the bug finding tools predominantly find defects different from those found by testing, but a subset of the defects found by reviews. However, the defect types that the tools can detect are analysed more thoroughly. Therefore, a combination of the techniques is most advisable, provided the tools' high number of false positives can be tolerated.
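
As an illustration (not drawn from the paper's case study), the following sketch shows the kind of defect that static bug finding tools typically flag on every execution path, while a test suite only catches it if the faulty path happens to be exercised. The class name, keys, and methods are hypothetical.

```java
// Hypothetical example of a defect class commonly reported by static
// bug finding tools: a possible null dereference.
import java.util.HashMap;
import java.util.Map;

public class ConfigLookup {

    private final Map<String, String> settings = new HashMap<>();

    // Map.get() returns null for unknown keys; calling trim() on the
    // result without a null check is a potential NullPointerException.
    // A static analyser can report this regardless of test coverage,
    // whereas a test misses it unless an unknown key is requested.
    public String timeoutSetting() {
        String value = settings.get("timeout");
        return value.trim();          // flagged: possible null dereference
    }

    // Defensive variant that a review comment or tool warning would suggest.
    public String timeoutSettingSafe() {
        String value = settings.getOrDefault("timeout", "30");
        return value.trim();
    }
}
```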