Coverage metrics answer the question of whether a given software artifact has been adequately checked. For example, statement coverage measures how many lines of code were executed and how often; path coverage measures which sequences of branch decisions, i.e., execution paths, were exercised. In recent years, researchers have introduced several effective static analysis techniques for checking software artifacts, and consequently more and more developers have begun embedding properties in code. Tools have also emerged that automatically infer system properties where they are not explicitly stated. We hypothesize that it is often more effective to evaluate test suites based on their coverage of system properties than on their coverage of structural program elements. In this paper, we present a novel coverage criterion and metrics that evaluate test cases with respect to their coverage of properties, and that measure the completeness of the properties themselves.
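A toy sketch of the contrast the abstract draws between structural coverage and property coverage (all function and property names here are hypothetical, not from the paper):

```python
# Hypothetical sketch: statement coverage vs. property coverage.
# Both are simple "exercised / total" ratios over different domains.

def statement_coverage(executed_lines: set, all_lines: set) -> float:
    """Fraction of source lines executed by the test suite."""
    return len(executed_lines & all_lines) / len(all_lines)

def property_coverage(exercised_props: set, all_props: set) -> float:
    """Fraction of system properties (embedded or automatically
    inferred) that the test suite actually exercises."""
    return len(exercised_props & all_props) / len(all_props)

# A suite touching 8 of 10 lines but exercising only 1 of 4 properties
# scores high structurally yet low on property coverage.
stmt = statement_coverage(set(range(8)), set(range(10)))    # 0.8
prop = property_coverage({"p1"}, {"p1", "p2", "p3", "p4"})  # 0.25
```

The example illustrates the paper's motivating hypothesis: a test suite can look adequate under structural metrics while leaving most system properties unchecked.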