Improving unfamiliar code with unit tests: an empirical investigation on tool-supported and human-based testing

  • Authors:
  • Dietmar Winkler; Martina Schmidt; Rudolf Ramler; Stefan Biffl

  • Affiliations:
  • Institute of Software Technology, Christian Doppler Laboratory "Software Engineering Integration for Flexible Automation Systems" (CDL-Flex), Vienna University of Technology, Vienna, Austria (Winkler, Schmidt, Biffl); Software Competence Center Hagenberg, Hagenberg, Austria (Ramler)

  • Venue:
  • PROFES'12 Proceedings of the 13th international conference on Product-Focused Software Process Improvement
  • Year:
  • 2012


Abstract

Software testing is a well-established practice in modern software engineering for improving software products by systematically introducing unit tests at different levels during development projects. Nevertheless, existing software solutions often suffer from a lack of unit tests, which were not implemented during development because of time and/or resource constraints. Missing unit tests can hinder effective and efficient maintenance processes. Introducing unit tests after deployment is a promising approach for (a) enabling systematic and automation-supported testing after deployment and (b) significantly increasing product quality. An important question is whether such unit tests should be written manually by humans or generated automatically by tools. This paper presents an empirical investigation of tool-supported and human-based unit testing in a controlled experiment, focusing on defect detection effectiveness, false positives, and test coverage of the two testing approaches when applied to unfamiliar source code. The main results were that (a) the individual testing approaches (human-based and tool-supported) showed advantages for different defect classes, and (b) the tools delivered a higher number of false positives but (c) achieved higher test coverage.