Search-enhanced testing (NIER track)
Proceedings of the 33rd International Conference on Software Engineering
Automating software testing can significantly reduce the time and effort required to assure the quality of software systems, and over recent years substantial strides have been made in test automation techniques. However, one aspect of software testing has always resisted full automation: the determination of the expected results for given system states and input values, the so-called "oracle problem". Fortunately, the recent advent of a new generation of software search engines containing millions of reusable software artifacts offers an elegant solution to this dilemma. Once a search engine can deliver multiple results that conform to a given specification (by searching for and adapting preexisting components), multi-version testing of software with "harvested" oracles becomes a feasible alternative to manual oracle definition. In this paper we present an approach to Search-Enhanced Testing, with a focus on the discovery of discrepancies between the results returned by harvested test oracles and those of a Component Under Test for randomly generated test invocations. Our current research aims to validate the hypothesis that human test engineers find more defects when analyzing such automatically discovered discrepancies than when developing test cases using traditional coverage criteria.
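To make the workflow concrete, the following is a minimal sketch in Java of the kind of differential test driver the abstract describes: a component under test and a set of harvested oracle implementations are exercised with the same randomly generated inputs, and any disagreement is flagged for a human test engineer. All class and method names here are hypothetical illustrations under assumed types, not the paper's actual tooling.

```java
import java.util.List;
import java.util.Random;
import java.util.function.Function;

/**
 * Sketch of discrepancy discovery in search-enhanced testing: the component
 * under test (CUT) and a set of harvested oracle implementations are driven
 * with the same randomly generated inputs, and disagreements are reported
 * for manual inspection. Names and types are hypothetical.
 */
public final class DiscrepancyFinder {

    public static <I, O> void run(Function<I, O> cut,
                                  List<Function<I, O>> harvestedOracles,
                                  Function<Random, I> inputGenerator,
                                  int invocations) {
        Random random = new Random(42); // fixed seed keeps runs reproducible
        for (int i = 0; i < invocations; i++) {
            I input = inputGenerator.apply(random);
            O cutResult = cut.apply(input);
            for (Function<I, O> oracle : harvestedOracles) {
                O oracleResult = oracle.apply(input);
                if (!cutResult.equals(oracleResult)) {
                    // A discrepancy is not necessarily a defect in the CUT:
                    // the harvested component may itself be faulty, so the
                    // case is handed to a human test engineer for analysis.
                    System.out.printf("discrepancy on input %s: CUT=%s, oracle=%s%n",
                            input, cutResult, oracleResult);
                }
            }
        }
    }

    public static void main(String[] args) {
        // Toy example: an integer-average CUT that overflows for large
        // operands, checked against a harvested overflow-safe alternative.
        Function<int[], Integer> cut = a -> (a[0] + a[1]) / 2;
        Function<int[], Integer> oracle = a -> (int) (((long) a[0] + a[1]) / 2);
        run(cut, List.of(oracle),
            r -> new int[] { r.nextInt(), r.nextInt() },
            1_000);
    }
}
```

In this toy run the random inputs routinely trigger integer overflow in the CUT, so the driver surfaces discrepancies without any manually written expected values, which is the core idea the paper's hypothesis then asks human engineers to analyze.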