The level of confidence in a software component is often linked to the quality of its test cases. This quality can in turn be evaluated with mutation analysis: faults are injected into the software component (producing mutants of it) to measure the proportion of mutants detected ('killed') by the test cases. But while generating a set of basic test cases is easy, improving its quality may require prohibitive effort. This paper focuses on the issue of automating test optimization. Genetic algorithms appear to be a promising way of tackling it. The optimization problem is modelled as follows: a test case can be considered a predator, while a mutant program is analogous to a prey. Starting from an initial set of predators (the test case set provided by the programmer), the selection process aims to generate test cases able to kill as many mutants as possible. Experiments on .NET components and unit Eiffel classes gave disappointing results, so a slight variation on this idea is studied, no longer at the 'animal' level (lions killing zebras, say) but at the bacteriological level. The bacteriological level better reflects the test case optimization issue: it differs from the genetic approach mainly by the introduction of a memorization function and the suppression of the crossover operator. The purpose of this paper is to explain how genetic algorithms have been adapted to fit the test optimization problem. The resulting algorithm differs so much from genetic algorithms that it has been given another name: the bacteriological algorithm. Copyright © 2005 John Wiley & Sons, Ltd.

Based on 'Genes and bacteria for automatic test cases optimization in the .NET environment' by Benoit Baudry, Frank Fleurey, Jean-Marc Jézéquel and Yves Le Traon, which appeared in Proceedings of the International Symposium on Software Reliability Engineering, Annapolis, MD, November 2002, pp. 195–206 [1]. © 2002 IEEE.
This revised and expanded version appears here with the permission of the IEEE.
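To make the adaptation concrete, the loop below is a minimal sketch (not the authors' implementation) of a bacteriological optimization: fitness is the number of mutants a test case kills, useful bacteria are memorized into a growing solution set, and new generations are produced by mutation alone, with no crossover. The mutation-analysis step is simulated here by a deterministic pseudo-random kill map; all names and parameters are hypothetical.

```python
import random

NUM_MUTANTS = 50  # size of the simulated mutant pool (hypothetical)

def kill_set(test_case):
    """Simulated mutation analysis: the set of mutant IDs this test
    case kills. A test case is abstracted as an integer seed."""
    rng = random.Random(test_case)
    return {m for m in range(NUM_MUTANTS) if rng.random() < 0.1}

def mutate(test_case):
    """Mutation operator: a small random perturbation of the test case
    (stands in for mutating test input data)."""
    return test_case + random.randint(-5, 5)

def bacteriological_optimize(initial_tests, rounds=100):
    """Bacteriological loop: memorization instead of crossover."""
    population = list(initial_tests)
    memory = []      # memorization function: retained useful test cases
    killed = set()   # mutants already killed by the memorized set
    for _ in range(rounds):
        # Fitness = number of *new* mutants a bacterium would kill.
        scored = [(len(kill_set(t) - killed), t) for t in population]
        best_gain, best = max(scored)
        if best_gain > 0:            # memorize only if coverage improves
            memory.append(best)
            killed |= kill_set(best)
        # No crossover: the next generation comes from mutation alone.
        population = [mutate(t) for t in population]
    return memory, killed

if __name__ == "__main__":
    random.seed(0)
    memory, killed = bacteriological_optimize([1, 2, 3])
    print(len(memory), "test cases memorized,", len(killed), "mutants killed")
```

The key structural difference from a genetic algorithm is visible in the loop: the result is the accumulated `memory` set rather than the final population, and recombination of individuals never occurs.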