Explaining Counterexamples Using Causality

  • Authors:
  • Ilan Beer; Shoham Ben-David; Hana Chockler; Avigail Orni; Richard Trefler

  • Affiliations:
  • IBM Research, Mount Carmel, Haifa, Israel 31905; David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, Canada; IBM Research, Mount Carmel, Haifa, Israel 31905; IBM Research, Mount Carmel, Haifa, Israel 31905; David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, Canada

  • Venue:
  • CAV '09: Proceedings of the 21st International Conference on Computer Aided Verification
  • Year:
  • 2009

Abstract

When a model does not satisfy a given specification, a counterexample is produced by the model checker to demonstrate the failure. A user must then examine the counterexample trace in order to visually identify the failure that it demonstrates. If the trace is long, or the specification is complex, finding the failure in the trace becomes a non-trivial task. In this paper, we address the problem of analyzing a counterexample trace and highlighting the failure that it demonstrates. Using the notion of causality, introduced by Halpern and Pearl, we formally define a set of causes for the failure of the specification on the given counterexample trace. These causes are marked as red dots and presented to the user as a visual explanation of the failure. We study the complexity of computing the exact set of causes, and provide a polynomial-time algorithm that approximates it. This algorithm is implemented as a feature in the IBM formal verification platform RuleBase PE, where these visual explanations are an integral part of every counterexample trace. Our approach is independent of the tool that produced the counterexample, and can be applied as a lightweight external layer to any model checking tool, or used to explain simulation traces.
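To make the idea concrete, the following is a minimal sketch of the counterfactual intuition behind the paper's causes, not the paper's actual algorithm. It assumes (as an illustration only) that a trace is a list of dicts mapping signal names to booleans and that the specification is a Python predicate over the whole trace; full Halpern-Pearl causality also considers flipping values under contingencies (sets of other changed values), which this singleton-flip test omits.

```python
# A minimal sketch of counterfactual cause-finding on a finite trace.
# Assumptions (not from the paper): a trace is a list of dicts mapping
# signal names to booleans, and the specification is a Python predicate
# over the whole trace. Halpern-Pearl causality also allows flips under
# contingencies; this sketch tests single-value flips only.

from typing import Callable, Dict, List, Tuple

Trace = List[Dict[str, bool]]
Spec = Callable[[Trace], bool]

def singleton_causes(trace: Trace, spec: Spec) -> List[Tuple[int, str]]:
    """Return (step, signal) pairs whose single-value flip changes the verdict.

    Precondition: spec(trace) is False (the trace is a counterexample).
    Each returned pair is a candidate "red dot": flipping that one value
    makes the specification pass on the modified trace.
    """
    assert not spec(trace), "expected a failing trace"
    causes = []
    for step, state in enumerate(trace):
        for signal in state:
            flipped = [dict(s) for s in trace]         # copy the trace
            flipped[step][signal] = not state[signal]  # flip one value
            if spec(flipped):                          # verdict changed?
                causes.append((step, signal))
    return causes

# Usage: a mutual-exclusion invariant G !(grant0 & grant1), violated at step 2.
def mutex(trace: Trace) -> bool:
    return all(not (s["grant0"] and s["grant1"]) for s in trace)

trace = [
    {"grant0": True,  "grant1": False},
    {"grant0": False, "grant1": False},
    {"grant0": True,  "grant1": True},   # both grants high: the failure
]
print(singleton_causes(trace, mutex))    # [(2, 'grant0'), (2, 'grant1')]
```

Note that this naive test costs one specification evaluation per (step, signal) pair, i.e. polynomially many, which mirrors the paper's trade-off: computing the exact Halpern-Pearl causes is hard in general, so a polynomial-time approximation is used in practice.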