The design of complex systems is largely ruled by the time needed for verification. Even though formal methods can provide higher reliability, in practice simulation-based verification is often used. Large testbenches are created, and if the design produces the correct output for all stimuli, it is considered correct. However, there is no guarantee that the testbench is complete in the sense that it contains test cases for all "important" situations. We propose an approach to detect "gaps" in testbenches, i.e., behavior that is not tested. The approach relies on the automatic generation of properties from the testbench in terms of a formal property language. By construction, the properties are valid within the testbench. A model checker then proves the validity of each property on the design. If this proof succeeds, the testbench covers all possible situations for the given signals. In case of failure, counterexamples are produced. These counterexamples represent behavior that is not tested, i.e., a gap in the testbench. Experiments underline the feasibility of the approach.
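To make the flow concrete, the following is a minimal, self-contained Python sketch of the idea, not the paper's actual implementation: traces are per-cycle signal valuations, the mined "properties" are simple implications that hold on every testbench stimulus, and the model-checking step is stood in for by an exhaustive walk over a toy set of reachable design states. All names (mine_implications, find_gaps, req, ack) are hypothetical; a real flow would emit properties in a formal language such as PSL and invoke an external model checker.

# Illustrative sketch of testbench-gap detection, under the assumptions above.
from itertools import product

def mine_implications(traces, signals):
    # Return implications (a, va, b, vb) meaning "a == va implies b == vb"
    # that hold in every cycle of every testbench trace. Like the generated
    # properties in the paper, they are valid on the testbench by construction.
    candidates = set()
    for a, b in product(signals, repeat=2):
        if a == b:
            continue
        for va, vb in product((0, 1), repeat=2):
            candidates.add((a, va, b, vb))
    for trace in traces:
        for cycle in trace:
            candidates = {
                (a, va, b, vb)
                for (a, va, b, vb) in candidates
                if not (cycle[a] == va and cycle[b] != vb)
            }
    return candidates

def find_gaps(implications, reachable_states):
    # Stand-in for the model-checking step: a reachable design state that
    # violates a testbench-valid property is behavior the testbench never
    # exercises, i.e., a gap (the counterexample in the real flow).
    gaps = []
    for (a, va, b, vb) in implications:
        for state in reachable_states:
            if state[a] == va and state[b] != vb:
                gaps.append(((a, va, b, vb), state))
                break
    return gaps

# Toy example: the testbench only ever drives req and ack together, so the
# mined property "req == 1 implies ack == 1" holds on all stimuli ...
traces = [[{"req": 0, "ack": 0}, {"req": 1, "ack": 1}]]
# ... but the design can actually reach req == 1, ack == 0: an untested gap.
reachable = [{"req": 0, "ack": 0}, {"req": 1, "ack": 1}, {"req": 1, "ack": 0}]

props = mine_implications(traces, ["req", "ack"])
for prop, witness in find_gaps(props, reachable):
    print("untested behavior:", prop, "witness state:", witness)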