Evaluation of safety-critical software. Communications of the ACM.
Compound-Poisson Software Reliability Model. IEEE Transactions on Software Engineering.
Faults on its sleeve: amplifying software reliability testing. ISSTA '93 Proceedings of the 1993 ACM SIGSOFT international symposium on Software testing and analysis.
A Markov Chain Model for Statistical Software Testing. IEEE Transactions on Software Engineering.
Statistical testing of software based on a usage model. Software—Practice & Experience.
Some Conservative Stopping Rules for the Operational Testing of Safety-Critical Software. IEEE Transactions on Software Engineering.
A Binary Markov Process Model for Random Testing. IEEE Transactions on Software Engineering.
Systems testing and statistical test data coverage. COMPSAC '97 Proceedings of the 21st International Computer Software and Applications Conference.
Confidence-Based Reliability And Statistical Coverage Estimation. ISSRE '97 Proceedings of the Eighth International Symposium on Software Reliability Engineering.
High quality behavioral verification using statistical stopping criteria. Proceedings of the conference on Design, automation and test in Europe.
Stopping Criteria Comparison: Towards High Quality Behavioral Verification. ISQED '01 Proceedings of the 2nd International Symposium on Quality Electronic Design.
High Assurance Software Testing In Business And DOD. Journal of Integrated Design & Process Science.
EUC'06 Proceedings of the 2006 international conference on Embedded and Ubiquitous Computing.
When designing a system at the behavioral level, one of the most important steps is verifying its functionality before it is released to the logic/PD design phase. In industry, behavioral models often serve as oracles against which the final chip is tested. In this work, we use branch coverage as a measure of the quality of verifying/testing behavioral models. The proposed stopping rule achieves a given quality level with minimum effort: it guides the process to switch to a different testing strategy, using a different type of pattern (i.e., random vs. functional) or a different set of parameters for generating patterns/test cases, whenever the current strategy is no longer expected to increase coverage. We demonstrate the stopping rule on two complex behavioral-level VHDL models tested for branch coverage across four testing phases. Comparing the number of applied test patterns and the quality of testing with and without the stopping rule, we show that switching phases at the points indicated by the rule yields the same or better coverage with fewer test patterns.
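The phase-switching idea described above can be sketched as a small simulation. This is a minimal illustration, not the authors' algorithm: the branch universe, the pattern generators, the `window` threshold, and all names here are hypothetical, and the stopping criterion is simplified to "switch phases once a fixed number of consecutive patterns add no new branches."

```python
import random

def run_phase(generate_pattern, covered, window=50):
    """Apply patterns from one testing phase until the (simplified) stopping
    rule fires: switch once `window` consecutive patterns add no new branches."""
    stale = 0      # consecutive patterns with no coverage gain
    applied = 0    # patterns spent in this phase
    while stale < window:
        new_branches = generate_pattern() - covered
        covered |= new_branches
        applied += 1
        stale = 0 if new_branches else stale + 1
    return applied

# Hypothetical 200-branch model; two pattern generators stand in for the
# "random vs. functional" strategies mentioned in the abstract.
random.seed(0)
BRANCHES = set(range(200))

def random_patterns():
    # Random stimuli: reach only the 150 "easy" branches.
    return {random.randrange(150) for _ in range(3)}

def functional_patterns():
    # Functional stimuli: can also reach the harder corner-case branches.
    return {random.randrange(200) for _ in range(3)}

covered = set()
total = 0
for phase in (random_patterns, functional_patterns):
    total += run_phase(phase, covered)

print(f"branch coverage: {len(covered)}/{len(BRANCHES)} after {total} patterns")
```

Running all phases to exhaustion would spend many more patterns on the already-saturated random phase; cutting a phase off as soon as its coverage curve flattens is what yields the pattern savings the abstract reports.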