Feedback-Directed Random Test Generation

  • Authors:
  • Carlos Pacheco; Shuvendu K. Lahiri; Michael D. Ernst; Thomas Ball

  • Affiliations:
  • MIT CSAIL; Microsoft Research; MIT CSAIL; Microsoft Research

  • Venue:
  • ICSE '07: Proceedings of the 29th International Conference on Software Engineering
  • Year:
  • 2007

Abstract

We present a technique that improves random test generation by incorporating feedback obtained from executing test inputs as they are created. Our technique builds inputs incrementally by randomly selecting a method call to apply and finding arguments from among previously constructed inputs. As soon as an input is built, it is executed and checked against a set of contracts and filters. The result of the execution determines whether the input is redundant, illegal, contract-violating, or useful for generating more inputs. The technique outputs a test suite consisting of unit tests for the classes under test. Passing tests can be used to ensure that code contracts are preserved across program changes; failing tests (that violate one or more contracts) point to potential errors that should be corrected. Our experimental results indicate that feedback-directed random test generation can outperform systematic and undirected random test generation in terms of coverage and error detection. On four small but nontrivial data structures (used previously in the literature), our technique achieves block and predicate coverage higher than or equal to that of model checking (with and without abstraction) and undirected random generation. On 14 large, widely used libraries (comprising 780 KLOC), feedback-directed random test generation finds many previously unknown errors not found by either model checking or undirected random generation.
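
To make the generation loop described in the abstract concrete, the following self-contained Java sketch illustrates the idea: extend a previously built input with a randomly selected call, execute it immediately, and use the result to classify it as contract-violating, redundant, or useful. Everything here is a simplifying assumption made for illustration: the class name, the toy contracts, the state-string redundancy filter, and the specialization to ArrayList<Integer> are not the authors' implementation.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;
import java.util.function.Consumer;

// Minimal sketch of a feedback-directed generation loop (hypothetical names;
// contracts and filters are toy stand-ins for the ones the paper describes).
public class FeedbackDirectedSketch {

    static final Random RNG = new Random(0);

    public static void main(String[] args) {
        // A test input is a sequence of calls applied to a fresh receiver.
        List<List<Consumer<ArrayList<Integer>>>> pool = new ArrayList<>();
        pool.add(new ArrayList<>());                  // seed with the empty sequence
        Set<String> seenStates = new HashSet<>();     // filter for redundant inputs
        List<List<Consumer<ArrayList<Integer>>>> errorTests = new ArrayList<>();

        for (int i = 0; i < 200; i++) {
            // 1. Build a new input incrementally: extend a previously built
            //    sequence with a randomly selected method call.
            List<Consumer<ArrayList<Integer>>> base = pool.get(RNG.nextInt(pool.size()));
            List<Consumer<ArrayList<Integer>>> candidate = new ArrayList<>(base);
            candidate.add(randomCall());

            // 2. Execute the candidate as soon as it is built.
            ArrayList<Integer> receiver = new ArrayList<>();
            boolean contractViolated = false;
            try {
                for (Consumer<ArrayList<Integer>> call : candidate) call.accept(receiver);
                contractViolated = receiver.size() < 0;   // toy contract: size is never negative
            } catch (RuntimeException e) {
                contractViolated = true;                  // toy contract: no unexpected exceptions
            }

            // 3. Use the execution result as feedback to classify the input.
            if (contractViolated) {
                errorTests.add(candidate);                // failing test: points to a potential error
            } else if (seenStates.add(receiver.toString())) {
                pool.add(candidate);                      // useful: new state, keep for further extension
            }                                             // else redundant: discard
        }
        System.out.printf("pool=%d sequences, error tests=%d%n", pool.size(), errorTests.size());
    }

    // Randomly choose one of the methods under test, with random arguments.
    static Consumer<ArrayList<Integer>> randomCall() {
        int v = RNG.nextInt(10);
        int choice = RNG.nextInt(3);
        if (choice == 0) return l -> l.add(v);                              // add a random value
        if (choice == 1) return l -> { if (!l.isEmpty()) l.remove(0); };    // remove the head, if any
        return l -> { if (!l.isEmpty()) l.set(v % l.size(), v); };          // overwrite an element, if any
    }
}
```

In this sketch, sequences that violate a contract would be emitted as failing unit tests, while sequences that reach a new state stay in the pool and serve as arguments for later, longer inputs; redundant sequences are dropped so the random search does not keep re-exploring states it has already seen.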