Automated Test Data Generation for Coverage: Haven't We Solved This Problem Yet?

  • Authors:
  • Kiran Lakhotia; Phil McMinn; Mark Harman

  • Venue:
  • TAIC-PART '09 Proceedings of the 2009 Testing: Academic and Industrial Conference - Practice and Research Techniques
  • Year:
  • 2009

Abstract

Whilst there is much evidence that both concolic and search-based testing can outperform random testing, there has been little work demonstrating the effectiveness of either technique on complete, real-world software applications. As a consequence, many researchers have doubts not only about the scalability of both approaches but also about their applicability to production code. This paper presents an empirical study applying a concolic tool, CUTE, and a search-based tool, AUSTIN, to the source code of four large open source applications. Each tool is applied 'out of the box'; that is, without writing additional code for special handling of any of the individual subjects, or tuning the tools' parameters. Perhaps surprisingly, the results show that both tools obtain at best only a modest level of code coverage. Several challenges remain for improving automated test data generators in order to achieve higher levels of code coverage.