Does Adaptive Random Testing Deliver a Higher Confidence than Random Testing?

  • Authors:
  • Tsong Yueh Chen, Fei-Ching Kuo, Huai Liu, W. Eric Wong

  • Venue:
  • QSIC '08: Proceedings of the Eighth International Conference on Quality Software
  • Year:
  • 2008


Abstract

Random testing (RT) is a fundamental software testing technique. Motivated by the rationale that neighbouring test cases tend to cause similar execution behaviours, adaptive random testing (ART) was proposed as an enhancement of RT that enforces an even spread of random test cases over the input domain. ART has traditionally been compared with RT from the perspective of failure-detection capability, and previous studies have shown that ART requires fewer test cases than RT to detect the first software failure. In this paper, we compare ART and RT from the perspective of program-based coverage. Our experimental results show that, given the same number of test cases, ART normally achieves a higher percentage of coverage than RT. In conclusion, ART outperforms RT not only in failure-detection capability but also in the thoroughness of program-based coverage. Therefore, ART delivers higher confidence in the software under test than RT, even when no failure has been revealed.
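
The even-spread principle described in the abstract is most commonly realised by the fixed-size-candidate-set variant of ART (FSCS-ART): each new test case is picked from a small set of random candidates as the one farthest from all previously executed tests. The Python sketch below illustrates that selection loop for a one-dimensional numeric input domain; the function name, parameters, and the `is_failure` predicate are illustrative assumptions for this summary, not code taken from the paper.

```python
import random

def fscs_art(input_domain, is_failure, candidates_per_round=10, max_tests=1000):
    """Sketch of fixed-size-candidate-set ART on a 1-D numeric input domain.

    input_domain: (low, high) bounds for generating random inputs.
    is_failure:   hypothetical predicate that runs the program under test
                  on one input and reports whether it failed.
    Returns (failing_input_or_None, number_of_tests_executed).
    """
    low, high = input_domain

    # The first test case is purely random, as in plain RT.
    executed = [random.uniform(low, high)]
    if is_failure(executed[0]):
        return executed[0], 1

    for n in range(2, max_tests + 1):
        # Generate a fixed-size set of random candidates.
        candidates = [random.uniform(low, high) for _ in range(candidates_per_round)]
        # Choose the candidate farthest from all previously executed test cases,
        # which spreads the tests evenly over the input domain.
        best = max(candidates, key=lambda c: min(abs(c - e) for e in executed))
        if is_failure(best):
            return best, n
        executed.append(best)

    return None, max_tests

# Illustrative usage with a made-up failure region [42.0, 42.5]:
# failing_input, tests_used = fscs_art((0.0, 100.0), lambda x: 42.0 <= x <= 42.5)
```

The distance-based candidate selection is what distinguishes ART from plain RT here; RT would simply take each random candidate as the next test case without comparing it against the history of executed tests.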