Testing is one of the biggest problems of the software industry. Coverage is the main technique for showing that testing has been thorough. Coverage can be used to find a good regression suite, i.e., a set of tests that is run on the application after software or data changes to check that no new bugs were introduced. This paper describes the experience gained at the IBM Haifa Research Lab (HRL) in creating regression suites and minimizing their size while maintaining high quality as measured by coverage. The problem we solve, while similar to the one addressed in the literature, has a key difference: the compaction algorithm must run online, due to the large number of tests processed. We compare strategies for implementing online set cover. The trade-offs are between the quality of the solution (expressed by the size of the cover), the size of the intermediate sets, and the computational resources required. We show that it is possible to start discarding tests very early without arriving at a significantly larger final set.
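To make the "discard early" idea concrete, here is a minimal sketch of the simplest online set-cover rule for suite compaction: keep an incoming test only if it hits at least one coverage event not yet covered by the tests kept so far, and discard it immediately otherwise. This is an illustration of the general technique, not the paper's specific strategies; the (test name, covered-events) input format and all names below are assumptions made for the example.

```python
from typing import Iterable, List, Set, Tuple

def online_set_cover(tests: Iterable[Tuple[str, Set[str]]]) -> List[str]:
    """Greedy online set cover: keep a test only if it covers at least
    one event not yet covered by the kept suite; otherwise discard it
    on the spot, so intermediate storage stays small."""
    covered: Set[str] = set()   # coverage events hit so far
    suite: List[str] = []       # names of tests kept in the regression suite
    for name, events in tests:
        new_events = events - covered
        if new_events:          # the test contributes new coverage: keep it
            suite.append(name)
            covered |= new_events
        # else: the test is redundant with the current suite and is dropped
    return suite

# Hypothetical stream of three tests over five coverage events:
# t2 adds nothing beyond t1 and is discarded as soon as it is seen.
stream = [
    ("t1", {"e1", "e2"}),
    ("t2", {"e2"}),
    ("t3", {"e3", "e4", "e5"}),
]
print(online_set_cover(stream))  # ['t1', 't3']
```

Because tests are judged as they arrive, this rule never stores the full test set, which is the point of working online; the price, as the paper's trade-off discussion suggests, is that the resulting cover can be somewhat larger than one computed offline over all tests at once.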