Tightening test coverage metrics: a case study in equivalence checking using k-induction

  • Authors:
  • Alastair F. Donaldson; Nannan He; Daniel Kroening; Philipp Rümmer

  • Affiliations:
  • Computer Science Department, Oxford University, UK (Donaldson, He, Kroening); Department of Information Technology, Uppsala University, Uppsala, Sweden (Rümmer)

  • Venue:
  • FMCO'10: Proceedings of the 9th International Conference on Formal Methods for Components and Objects
  • Year:
  • 2010

Abstract

We present a case study applying the k-induction method to equivalence checking of Simulink designs. In particular, we are interested in the problem of equivalence detection in mutation-based testing: given a design S, determining whether a "mutant" design S′ derived from S by syntactic fault injection is behaviourally equivalent to S. In this situation, efficient equivalence checking techniques are needed to avoid redundant and expensive search for test cases that observe differences between S and S′. We have integrated k-induction into our test case generation framework for Simulink. We show, using a selection of benchmarks, that k-induction can be effective in detecting equivalent mutants, sometimes as a stand-alone technique, and sometimes with some manual assistance. We further discuss how the level of automation of the method can be increased by using static analysis to derive strengthening invariants from the structure of the Simulink models.
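As background (this sketch is not taken from the paper itself), the proof obligations that k-induction discharges for equivalence checking can be written as follows. Assume both designs are driven by the same input sequence u_0, u_1, ..., with generic transition relations T and T′, initial-state predicates I and I′, and a property P(x_i, x′_i) stating that the outputs of S and S′ agree in step i; these symbols are placeholders, not the paper's notation. For a chosen depth k, the base case establishes equivalence on all executions of length up to k from the initial states:

\[
I(x_0) \land I'(x'_0) \land \bigwedge_{i=0}^{k-1}\bigl(T(x_i,u_i,x_{i+1}) \land T'(x'_i,u_i,x'_{i+1})\bigr) \;\Rightarrow\; \bigwedge_{i=0}^{k} P(x_i,x'_i)
\]

and the inductive step shows that any k consecutive equivalent steps, starting at an arbitrary position n, force equivalence at the next step:

\[
\bigwedge_{i=n}^{n+k-1}\bigl(P(x_i,x'_i) \land T(x_i,u_i,x_{i+1}) \land T'(x'_i,u_i,x'_{i+1})\bigr) \;\Rightarrow\; P(x_{n+k},x'_{n+k})
\]

If both obligations hold, the mutant S′ is output-equivalent to S on all input sequences; a failing base case yields a concrete input sequence distinguishing the two designs, i.e. a test case. The strengthening invariants mentioned in the abstract are conjoined to P so that the inductive step becomes provable at a smaller depth k.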