An empirical study of regression test application frequency

  • Authors:
  • Jung-Min Kim; Adam Porter; Gregg Rothermel

  • Affiliations:
  • Hyundai Information Technology, Mabuk 1-8, Gusung, Yongin, Kyonggi-do, South Korea #449-910
  • Computer Science Department, University of Maryland, College Park, MD 20742, U.S.A.
  • Department of Computer Science and Engineering, University of Nebraska-Lincoln, 360 Avery Hall, Lincoln, NE 68588, U.S.A.

  • Venue:
  • Software Testing, Verification & Reliability
  • Year:
  • 2005

Abstract

Regression testing is an expensive process used to revalidate modified software. Regression test selection (RTS) techniques reduce the cost of regression testing by selecting and re-running only a subset of an existing test suite. Many RTS techniques have been proposed; some studies have shown that they can produce savings, while others have shown that their cost-effectiveness varies with characteristics of the workloads to which they are applied. It is plausible, however, that another factor affecting RTS techniques is the process by which they are applied, in particular the frequency with which regression testing is performed. Thus, in earlier work, an experiment was conducted to assess the effects of test application frequency on the cost-effectiveness of RTS techniques. The results exposed trade-offs to consider when using these techniques over a series of software releases. That work, however, was limited in external validity; in particular, the programs studied were relatively small. The previous experiment has therefore been replicated on a large, multi-version program. This second experiment confirms the findings of the first: the cost of using safe RTS techniques was strongly and negatively affected by the length of the testing interval, whereas the effectiveness of minimization RTS techniques was strongly and positively affected. Copyright © 2005 John Wiley & Sons, Ltd.
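
The abstract contrasts safe RTS techniques (which retain every test that might be affected by a change) with minimization techniques (which keep only enough tests to cover the changed code). The sketch below is a minimal, generic illustration of that distinction using hypothetical coverage data; it is not the specific techniques or subject programs studied in the paper.

```python
# Minimal sketch of coverage-based regression test selection (RTS).
# The test names, coverage map, and set of modified functions are
# hypothetical examples, not data from the paper.

def select_safe(coverage, modified):
    """Safe-style selection: keep every test that exercises any modified entity."""
    return [t for t, covered in coverage.items() if covered & modified]

def select_minimized(coverage, modified):
    """Minimization-style selection: greedily keep just enough tests to cover
    each modified entity at least once, discarding redundant tests."""
    selected, uncovered = [], set(modified)
    for t, covered in coverage.items():
        hit = covered & uncovered
        if hit:
            selected.append(t)
            uncovered -= hit
        if not uncovered:
            break
    return selected

if __name__ == "__main__":
    coverage = {                       # test -> functions it exercises (hypothetical)
        "test_login":    {"auth", "session"},
        "test_checkout": {"cart", "payment"},
        "test_profile":  {"auth", "profile"},
    }
    modified = {"auth"}                # functions changed in the new release (hypothetical)
    print(select_safe(coverage, modified))       # ['test_login', 'test_profile']
    print(select_minimized(coverage, modified))  # ['test_login']
```

The trade-off the study examines follows directly from this difference: a safe selection grows with the amount of change accumulated between testing sessions, so longer testing intervals raise its cost, whereas a minimized selection stays small but may omit tests and so trades fault-detection ability for savings.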