Automated testing of stochastic systems: a statistically grounded approach
ISSTA '06: Proceedings of the 2006 International Symposium on Software Testing and Analysis
Software testing research has so far focused mostly on deterministic software systems. In practice, however, randomized software systems (i.e. software systems with random output) also play an important role, e.g. for simulation purposes, and test evaluation is a real problem for them. In previous work, statistical hypothesis tests have been used, but the test decisions were not interpreted; moreover, those tests were only applied when theoretical values for the distribution of program outputs were available, not in the case of golden implementations. In the present paper, we propose a general approach for applying statistical hypothesis tests to randomized software systems, and we determine exactly the confidence gained through these tests. We show that after a statistical hypothesis test has been passed, it is guaranteed that at least the tested characteristics of the system under test are correct with a certain probability and accuracy. Our approach is also applicable in the case of golden implementations, where knowledge of the outputs' distribution is not required, which is a great advantage. Two case studies were conducted to assess the proposed approach. One of them is based on a software system for the simulation of stochastic geometric models (among others) that evolved from the GeoStoch research project and is now used at France Télécom R&D, Paris, to calculate costs for communication networks and to plan new network structures.
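The golden-implementation setting described in the abstract can be sketched as follows: sample outputs from both the system under test and a trusted reference implementation, then run a statistical hypothesis test on the two empirical distributions, so that no theoretical output distribution is needed. The sketch below uses a two-sample chi-square test on categorical outputs; all names, the sample size, and the significance level are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch (not the paper's algorithm): decide whether a randomized
# SUT matches a golden implementation by comparing output frequencies with a
# two-sample chi-square test. Stdlib only.
import random


def chi_square_two_sample(counts_a, counts_b):
    """Chi-square statistic for two equal-size samples over the same categories.

    With equal sample sizes the statistic simplifies to
    sum_i (a_i - b_i)^2 / (a_i + b_i).
    """
    assert sum(counts_a) == sum(counts_b), "sketch assumes equal sample sizes"
    return sum((a - b) ** 2 / (a + b)
               for a, b in zip(counts_a, counts_b) if a + b > 0)


def sample_counts(draw, n, categories, rng):
    """Draw n outputs from `draw` and tally how often each category occurs."""
    counts = [0] * categories
    for _ in range(n):
        counts[draw(rng)] += 1
    return counts


def passes_test(draw_sut, draw_golden, n=2000, categories=6,
                critical=11.07, seed=0):
    """Accept H0 ("same output distribution") iff the statistic stays below
    the critical value; 11.07 is the chi-square critical value for
    5 degrees of freedom at significance level alpha = 0.05."""
    rng = random.Random(seed)
    counts_sut = sample_counts(draw_sut, n, categories, rng)
    counts_golden = sample_counts(draw_golden, n, categories, rng)
    return chi_square_two_sample(counts_sut, counts_golden) < critical


# Hypothetical examples: a golden implementation of a fair six-sided die,
# and a faulty SUT that returns 0 about half the time.
fair = lambda rng: rng.randrange(6)
biased = lambda rng: 0 if rng.random() < 0.5 else rng.randrange(6)
```

In the spirit of the abstract, passing such a test does not prove correctness outright: it only guarantees, with a confidence determined by the significance level and the sample size, that the tested characteristic (here, the output frequencies) of the system under test agrees with the golden implementation.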