This paper presents a methodology for random testing of software models. Random testing tools can be used very effectively early in the modeling process, e.g., while writing a formal requirements specification for a given system. In this phase users cannot know whether a correct operational model is being built or whether the properties that the model must satisfy have been correctly identified and stated. It is therefore very useful to have tools that quickly identify errors in the operational model or in the properties so that appropriate corrections can be made. Using Lurch, our random testing tool for finite-state models, we evaluated the effectiveness of random model testing by detecting manually seeded errors in an SCR specification of a real-world personnel access control system. Lurch quickly detected over 80% of the seeded errors, which is very encouraging. To understand why some mutants went undetected, we further defined and measured test coverage metrics. The coverage measures revealed the pitfalls of random testing of formal models, pointing to opportunities for future improvement.
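To illustrate the methodology at a high level, the following minimal sketch (not the paper's Lurch tool, and not the actual SCR specification) runs random event sequences over a hypothetical finite-state model of a door-access controller, with one manually seeded error, and checks a safety property on every transition. All names and the model itself are illustrative assumptions.

```python
import random

def make_step(mutated=False):
    """Transition function for a toy access controller.
    State = (locked, badge_validated)."""
    def step(state, event):
        locked, validated = state
        if event == "badge_ok":
            return (locked, True)
        if event == "badge_bad":
            return (locked, False)
        if event == "open":
            # Seeded error: the mutant drops the badge-validation guard.
            if validated or mutated:
                return (False, validated)
            return state
        if event == "close":
            return (True, False)
        return state
    return step

def safe_transition(pre, post):
    """Property: the door may become unlocked only when a badge
    was validated in the pre-state."""
    pre_locked, pre_valid = pre
    post_locked, _ = post
    if pre_locked and not post_locked:
        return pre_valid
    return True

def random_test(step, runs=200, depth=20, seed=0):
    """Run random event sequences; return True iff no violation is found."""
    rng = random.Random(seed)
    events = ["badge_ok", "badge_bad", "open", "close"]
    for _ in range(runs):
        state = (True, False)  # start locked, no badge validated
        for _ in range(depth):
            nxt = step(state, rng.choice(events))
            if not safe_transition(state, nxt):
                return False  # seeded error detected
            state = nxt
    return True

print(random_test(make_step(mutated=False)))  # correct model passes
print(random_test(make_step(mutated=True)))   # mutant is detected
```

Because the seeded fault is reachable from the initial state in a single step, a short random walk exposes it almost immediately; harder-to-reach mutants are exactly the cases where coverage metrics, as discussed above, help explain what random testing misses.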