Testing randomized software by means of statistical hypothesis tests
Fourth international workshop on Software quality assurance: in conjunction with the 6th ESEC/FSE joint meeting
Automated tests can play a key role in ensuring system quality in software development. However, significant problems arise in automating tests of stochastic algorithms. Normally, developers write tests that simply check whether the actual result equals the expected result (perhaps within some tolerance). For stochastic algorithms, restricting ourselves in this way severely limits the kinds of tests we can write: either trivial tests, or fragile and hard-to-understand tests that rely on a particular seed for a random number generator. A richer and more powerful set of tests becomes possible if we accommodate tests of statistical properties of the results of running an algorithm many times. The work described in this paper was done in the context of a real-world application: a large-scale simulation of urban development designed to inform major decisions about land use and transportation. We describe our earlier experience with using automated testing for this system, in which we took a conventional approach, and the resulting difficulties. We then present a statistically based approach to testing stochastic algorithms built on hypothesis testing. Three different ways of constructing such tests are given, covering the most commonly used distributions. We evaluate these tests in terms of how often they fail when they should and when they should not, and conclude with guidelines and practical suggestions for implementing such unit tests in other stochastic applications.
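To illustrate the general idea of a hypothesis-test-based unit test (not the paper's actual test constructions), here is a minimal sketch in Python. It assumes a hypothetical stochastic function `biased_coin` under test, runs it many times, and applies a two-sided z-test on the observed proportion of successes; the test fails only when the z statistic exceeds the critical value for a chosen significance level (here roughly 0.001, so a correct implementation fails spuriously about once in a thousand runs).

```python
import math
import random
import unittest

def biased_coin(p=0.5, rng=random):
    """Hypothetical stochastic function under test: returns 1 with probability p."""
    return 1 if rng.random() < p else 0

class TestCoinFairness(unittest.TestCase):
    def test_proportion_via_z_test(self):
        n = 10_000
        z_critical = 3.29  # two-sided critical value for significance level ~0.001
        successes = sum(biased_coin() for _ in range(n))
        p_hat = successes / n
        # Standard error of the proportion under the null hypothesis p = 0.5
        se = math.sqrt(0.5 * 0.5 / n)
        z = (p_hat - 0.5) / se
        self.assertLess(abs(z), z_critical)

if __name__ == "__main__":
    unittest.main()
```

Note the trade-off the significance level encodes: a stricter threshold reduces spurious failures of a correct implementation but makes the test less sensitive to genuine bias, which is exactly the false-positive/false-negative tension the abstract's evaluation addresses.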