This paper describes an experiment in which simple syntactic alterations were introduced into program text in order to evaluate the testing strategy known as error seeding. The goal of the experiment was to determine whether randomly placed syntactic manipulations can produce failure characteristics similar to those of indigenous errors found in unseeded programs. Several programs were available from a separate experiment, all written to the same specification and thus intended to be functionally equivalent. Using functionally equivalent programs removed individual programmer style as a variable from the error seeding experiment. Each of six different syntactic manipulations was introduced into each program, and the mean times to failure of the seeded errors were observed. The seeded errors exhibited a broad spectrum of mean times to failure, independent of the syntactic alteration used. We conclude that it is possible, using only simple syntactic techniques, to seed errors that are arbitrarily difficult to locate. In addition, several unexpected results indicate that some issues involved in error seeding have not been addressed previously.
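To make the idea concrete, the kind of simple syntactic manipulation described above can be sketched as a small text transformation. The following Python sketch is illustrative only (it is not the paper's tooling, and the specific manipulation, swapping a `<` comparison to `<=`, is an assumed example of a simple syntactic alteration):

```python
import re

def seed_relational_error(source: str, occurrence: int = 0) -> str:
    """Seed a fault by replacing the Nth standalone '<' with '<='.

    This mimics a randomly placed syntactic manipulation: the program
    still compiles, but a boundary condition is now wrong.
    """
    # Match '<' that is not part of '<<', '<=', '!=', or '=<'-like tokens.
    matches = list(re.finditer(r"(?<![<=!])<(?!=)", source))
    if occurrence >= len(matches):
        return source  # nothing to seed at this position
    m = matches[occurrence]
    return source[: m.start()] + "<=" + source[m.end():]

original = "if (i < n) { sum += a[i]; }"
seeded = seed_relational_error(original)
print(seeded)  # the branch now also accepts i == n (off-by-one fault)
```

In an experiment like the one described, such a mutated version of each program would then be run against test inputs to measure the seeded error's mean time to failure.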