What went wrong: explaining counterexamples

  • Authors:
  • Alex Groce; Willem Visser

  • Affiliations:
  • Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA (Groce); RIACS, NASA Ames Research Center, Moffett Field, CA (Visser)

  • Venue:
  • SPIN '03: Proceedings of the 10th International Conference on Model Checking Software
  • Year:
  • 2003

Abstract

One of the chief advantages of model checking is the production of counterexamples demonstrating that a system does not satisfy a specification. However, extracting the essence of an error from even a detailed source-level trace of a failing run can require a great deal of human effort. We use an automated method to find multiple versions of an error (and similar executions that do not produce an error), and we analyze these executions to produce a more succinct description of the key elements of the error. The resulting description identifies the portions of the source code crucial to distinguishing failing runs from succeeding ones, the differences in invariants between failing and succeeding runs, and the changes in scheduling and environmental actions needed to cause succeeding runs to fail.
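The abstract describes a comparative analysis over many failing and succeeding executions. As a minimal sketch of the trace-comparison step only (the authors' actual analysis is integrated with a model checker and also covers invariants and scheduling), the Python below treats each execution as a list of source locations and reports the locations that separate every failing run from all succeeding runs. The function name and the traces are hypothetical, not taken from the paper.

    # Minimal sketch, not the authors' implementation: each run is the
    # sequence of (file, line) source locations it executed.
    def distinguishing_locations(failing_runs, passing_runs):
        """Locations executed in every failing run but in no passing run:
        candidates for code crucial to distinguishing the two kinds of runs."""
        in_all_failing = set.intersection(*(set(r) for r in failing_runs))
        in_any_passing = set().union(*(set(r) for r in passing_runs))
        return in_all_failing - in_any_passing

    # Hypothetical traces for illustration.
    failing = [
        [("acct.c", 10), ("acct.c", 14), ("acct.c", 22)],
        [("acct.c", 10), ("acct.c", 14), ("acct.c", 30)],
    ]
    passing = [
        [("acct.c", 10), ("acct.c", 12), ("acct.c", 22)],
        [("acct.c", 10), ("acct.c", 12), ("acct.c", 30)],
    ]

    print(distinguishing_locations(failing, passing))
    # -> {('acct.c', 14)}: only the failing runs reach line 14.

In the paper's setting the runs themselves come from the model checker (multiple counterexamples plus nearby successful executions), and the report additionally covers invariant differences and the scheduling or environment changes that turn a succeeding run into a failing one.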