Effectively Combining Software Verification Strategies: Understanding Different Assumptions

  • Authors:
  • David Owen; Dejan Desovski; Bojan Cukic

  • Affiliations:
  • West Virginia University, Morgantown, WV (all authors)

  • Venue:
  • ISSRE '06 Proceedings of the 17th International Symposium on Software Reliability Engineering
  • Year:
  • 2006

Abstract

In this paper we describe an experiment in which inconsistent results between two tools for testing formal models (and a third tool used to determine which of the two was correct) led us to examine more carefully how each tool was being used and to a clearer understanding of each tool's output. For the experiment, we created error-seeded versions of an SCR specification representing a real-world personnel access control system. These were checked using the model checker SPIN and Lurch, our random testing tool for finite-state models. In one case a property violation was detected by Lurch, an incomplete tool, but missed by SPIN, a model checker designed for complete verification. We used the SCR Toolset and the Salsa invariant checker to confirm that the violation detected by Lurch was indeed present in the specification. We then looked more closely at how we were using SPIN in conjunction with the SCR Toolset and eventually made adjustments so that SPIN also detected the property violation initially found only by Lurch. Once it was clear that the tools were being used correctly and would give consistent results, we conducted a second experiment to determine how they could be combined to optimize completeness and efficiency. We found that combining tools made it possible to verify the specifications faster and, in most cases, with much less memory.
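The contrast the abstract draws — an incomplete random tester (Lurch) versus an exhaustive model checker (SPIN), and a combined strategy that runs the cheap tool first — can be illustrated with a minimal sketch. Everything below is hypothetical: the toy transition system, the safety property, and the function names are invented for illustration and do not reflect the actual SCR specification or the internals of Lurch or SPIN.

```python
import random
from collections import deque

# Hypothetical toy finite-state model: states are integers, transitions
# are an adjacency dict. State 4 is the (assumed) "bad" state.
TRANSITIONS = {
    0: [1, 2],
    1: [0, 3],
    2: [3],
    3: [4],
    4: [4],
}

def violates(state):
    """Assumed safety property: the model should never reach state 4."""
    return state == 4

def random_walk_check(start=0, walks=100, depth=10, seed=42):
    """Lurch-style incomplete search: many short random walks.
    May miss violations, but uses almost no memory."""
    rng = random.Random(seed)
    for _ in range(walks):
        state = start
        for _ in range(depth):
            if violates(state):
                return True          # counterexample found
            state = rng.choice(TRANSITIONS[state])
        if violates(state):
            return True
    return False  # nothing seen -- NOT a proof of correctness

def exhaustive_check(start=0):
    """SPIN-style complete search: breadth-first over all reachable
    states. Always gives a definitive answer, but must store the
    visited state set, which dominates memory use on large models."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if violates(state):
            return True
        for nxt in TRANSITIONS[state]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False  # verified: no reachable state violates the property

def combined_check():
    """Combined strategy in the spirit of the paper: try the cheap,
    incomplete random search first; escalate to the exhaustive search
    only if no violation turns up."""
    return random_walk_check() or exhaustive_check()
```

On this toy model the random walks almost always stumble into the bad state within a few steps, so `combined_check` rarely pays the memory cost of the exhaustive pass; on a correct model, the exhaustive pass still runs, preserving completeness.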