Bad pairs in software testing

  • Authors:
  • Daniel Hoffman, Chien-Hsing Chang, Gary Bazdell, Brett Stevens, Kevin Yoo

  • Affiliations:
  • University of Victoria, Dept. of Computer Science, Victoria, BC, Canada (Hoffman, Chang); Carleton University, Dept. of Math. and Statistics, Ottawa, ON, Canada (Bazdell, Stevens); Wurldtech Security Technologies Inc., Vancouver, BC, Canada (Yoo)

  • Venue:
  • TAIC PART '10: Proceedings of the 5th International Academic and Industrial Conference on Testing - Practice and Research Techniques
  • Year:
  • 2010

Abstract

With pairwise testing, the test model is a list of N parameters; each test case is an N-tuple, and the test space is the cross product of the N parameters. A pairwise test set is a set of N-tuples in which every pairwise combination of parameter values appears in at least one tuple. Well-known algorithms generate pairwise test sets far smaller than the test space. Pairwise testing has good tool support, is widely known in industry and academia, and its effectiveness is supported by empirical results. While pairwise testing is used to generate test inputs, we propose a novel analysis of the test outputs. We focus on bad pairs: pairs of parameter values that always result in a failed test case. We experimentally evaluate the frequency of occurrence of bad pairs using mutation testing with one and two faults per mutant. The results provide useful insights into two important relationships: (1) between faults and bad pairs and (2) between input selection and bad pairs. We then apply the approach to an industrial example in network vulnerability testing. Finally, we present error-locating arrays, a recent theoretical result that provides a powerful tool for bad-pairs analysis.
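The definitions in the abstract translate directly into a small executable check. The sketch below is a minimal illustration, not code from the paper: the function names, the ((index, value), (index, value)) pair encoding, and the (test_case, passed) result format are all our own assumptions. It verifies that a test set achieves pairwise coverage and then extracts candidate bad pairs, i.e. parameter-value pairs that never appear in a passing test case.

```python
from itertools import combinations

def pairs_in(test_case):
    """All ((index, value), (index, value)) pairs covered by one N-tuple."""
    return set(combinations(enumerate(test_case), 2))

def required_pairs(parameters):
    """Every pairwise combination of values drawn from two distinct parameters."""
    req = set()
    for (i, vals_i), (j, vals_j) in combinations(enumerate(parameters), 2):
        for a in vals_i:
            for b in vals_j:
                req.add(((i, a), (j, b)))
    return req

def is_pairwise_test_set(parameters, tests):
    """True if every required pair appears in at least one test case."""
    covered = set().union(*(pairs_in(t) for t in tests))
    return required_pairs(parameters) <= covered

def bad_pairs(results):
    """Candidate bad pairs: pairs that occur only in failing test cases.

    `results` is a list of (test_case, passed) records. A pair seen in
    any passing test case cannot be bad, so it is subtracted out.
    """
    in_pass, in_fail = set(), set()
    for tc, passed in results:
        (in_pass if passed else in_fail).update(pairs_in(tc))
    return in_fail - in_pass

if __name__ == "__main__":
    params = [["a", "b"], [0, 1], ["x", "y"]]
    tests = [("a", 0, "x"), ("a", 1, "y"), ("b", 0, "y"), ("b", 1, "x")]
    print(is_pairwise_test_set(params, tests))  # True: all 12 pairs covered

    # Hypothetical system under test: fails whenever param 0 = "b"
    # and param 1 = 1 occur together in a test case.
    results = [(t, not (t[0] == "b" and t[1] == 1)) for t in tests]
    for p in sorted(bad_pairs(results), key=str):
        print(p)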