Finding the Optimal Balance between Over and Under Approximation of Models Inferred from Execution Logs

  • Authors:
  • Paolo Tonella, Alessandro Marchetto, Cu Duy Nguyen, Yue Jia, Kiran Lakhotia, Mark Harman

  • Venue:
  • ICST '12: Proceedings of the 2012 IEEE Fifth International Conference on Software Testing, Verification and Validation
  • Year:
  • 2012

Abstract

Models inferred from execution traces (logs) may admit more behaviours than those possible in the real system (over-approximation) or may exclude behaviours that can indeed occur in the real system (under-approximation). Both problems negatively affect model-based testing. Over-approximation results in infeasible test cases, i.e., test cases that cannot be activated by any input data. Under-approximation results in missing test cases, i.e., system behaviours that are not represented in the model and are therefore never tested. In this paper, we balance over- and under-approximation of inferred models by resorting to multi-objective optimization achieved by means of two search-based algorithms: a multi-objective Genetic Algorithm (GA) and NSGA-II. We report results on two open-source web applications and compare the multi-objective optimization to the state-of-the-art KLFA tool. We show that it is possible to identify regions of the Pareto front that contain models which violate fewer application constraints and have a higher bug detection ratio. The Pareto fronts generated by the multi-objective GA contain a region where models violate on average 2% of an application's constraints, compared to 2.8% for NSGA-II and 28.3% for the KLFA models. Similarly, it is possible to identify a region on the Pareto front where the models inferred by the multi-objective GA have an average bug detection ratio of 110:3 and the models inferred by NSGA-II have an average bug detection ratio of 101:6. This compares to a bug detection ratio of 310928:13 for the KLFA tool.
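The abstract formulates model inference as a two-objective minimization problem: reduce over-approximation (behaviours the model admits but the log never shows) and under-approximation (behaviours in the log the model rejects), then select from the resulting Pareto front. The paper does not reproduce its implementation here, so the sketch below is a minimal, hypothetical Python illustration of that formulation only: models are abstracted as sets of accepted traces, the two objective functions are simplified stand-ins for the paper's fitness measures, and `pareto_front` performs plain non-dominated filtering rather than the full GA/NSGA-II search.

```python
# Hypothetical sketch: two-objective view of inferred-model quality.
# A "model" is abstracted as the set of event sequences it accepts;
# the "log" is the set of sequences actually observed. Real FSM
# inference, GA operators, and NSGA-II ranking are not shown.

def under_approximation(model_traces: frozenset, log_traces: set) -> float:
    """Fraction of observed traces the model fails to accept (missed behaviour)."""
    if not log_traces:
        return 0.0
    return len(log_traces - model_traces) / len(log_traces)


def over_approximation(model_traces: frozenset, log_traces: set) -> float:
    """Fraction of accepted traces never observed in the log (possibly infeasible)."""
    if not model_traces:
        return 0.0
    return len(model_traces - log_traces) / len(model_traces)


def dominates(a: tuple, b: tuple) -> bool:
    """Pareto dominance for two minimization objectives (over, under)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def pareto_front(candidates: list, log_traces: set) -> list:
    """Keep only the non-dominated candidate models."""
    scored = [
        (m, (over_approximation(m, log_traces), under_approximation(m, log_traces)))
        for m in candidates
    ]
    return [
        (m, s)
        for m, s in scored
        if not any(dominates(s2, s) for _, s2 in scored if s2 != s)
    ]


if __name__ == "__main__":
    # Toy log of user sessions for a hypothetical web application.
    log = {("login", "search"), ("login", "buy"), ("logout",)}
    candidates = [
        frozenset(log),                                 # exact fit
        frozenset(log | {("search", "buy")}),           # over-approximates
        frozenset({("login", "search"), ("logout",)}),  # under-approximates
    ]
    for model, (over, under) in pareto_front(candidates, log):
        print(f"over={over:.2f} under={under:.2f} accepted={len(model)} traces")
```

In this simplified setting, a model that over-approximates trades infeasible test cases for coverage, while one that under-approximates leaves observed behaviour untested; the paper's contribution is searching this trade-off with a multi-objective GA and NSGA-II and showing which Pareto-front regions yield the most useful models for testing.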