In software development, formal verification and simulation are seen as complementary paradigms: the former can guarantee the correctness of systems with respect to given properties, but does not scale; the latter scales, but cannot guarantee the absence of errors. In the authors' previous work, a mechanism for statically analysing a model was used to build an abstraction of the original model, which in turn guided a heuristic search in a guided model checker. We extend that work and apply the same technique to build a heuristically driven, or guided, random-walk model checker. This work sits at the intersection of several research areas: model checking, random walks, heuristic search and simulation. Novel here is the use of a heuristic mechanism to guide the random walk towards states of the model that may violate user-defined properties, and the use of an automatic abstraction scheme to build that heuristic. In a series of experiments, we compare the performance of our guided, random-walk-based tool against standard model-checking tools. We also introduce a new metric, which we call Process Error Participation (PEP), to classify model behaviour.
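The core idea of a heuristic-guided random walk can be pictured as follows. This is a minimal, hypothetical sketch, not the paper's implementation: the `successors`, `heuristic` and `is_error` interfaces are assumptions introduced for illustration, and the heuristic here would stand in for one derived from the automatic abstraction.

```python
import random

def guided_random_walk(initial, successors, heuristic, is_error, max_steps=10_000):
    """Sketch of a heuristic-guided random walk over a state space.

    successors(state) -> list of next states;
    heuristic(state)  -> estimated distance to a property-violating state
                         (lower = more promising);
    is_error(state)   -> True if the state violates the property.
    All three interfaces are hypothetical, for illustration only.
    """
    state = initial
    for _ in range(max_steps):
        if is_error(state):
            return state  # counterexample found
        succ = successors(state)
        if not succ:
            state = initial  # dead end: restart the walk
            continue
        # Bias the random choice towards low-heuristic (promising) successors,
        # rather than picking uniformly as a plain random walk would.
        weights = [1.0 / (1 + heuristic(s)) for s in succ]
        state = random.choices(succ, weights=weights, k=1)[0]
    return None  # no violation found within the step budget
```

The only difference from an unguided random walk is the weighting step: a uniform choice over `succ` recovers classic random-walk exploration, while the heuristic weights steer the walk towards suspected error states without ever storing the visited state set.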