Because an automated oracle frequently does not exist, test cases are often evaluated manually in practice. Automatic test data generators rarely take this into account, however, seeking only to maximise a program's structural coverage. The test data they produce tends to be a poor fit with the program's operational profile, so each test case takes longer for a human to check: the scenarios that arbitrary-looking data represent require time and effort to understand. This short paper proposes methods for extracting knowledge from programmers, source code and documentation, and for incorporating that knowledge into the automatic test data generation process, so as to inject the realism required to produce test cases that are quick and easy for a human to comprehend and check. The aim is to reduce the so-called qualitative human oracle costs associated with automatic test data generation. The potential benefits of such an approach are demonstrated with a simple case study.
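To illustrate the contrast the abstract draws, the sketch below is a minimal, hypothetical example (not the paper's actual technique): one generator produces arbitrary character strings of the kind a purely coverage-driven search might emit, while another draws from a small pool of realistic tokens, standing in for knowledge mined from documentation or an operational profile. Both can exercise the same branches of a toy validator, but only the realistic inputs read as scenarios a human oracle can check quickly. All names and data here are invented for illustration.

```python
import random
import string

# Toy program under test: a crude email-shaped validator (illustrative only).
def is_valid_email(s):
    return "@" in s and "." in s.split("@")[-1]

def arbitrary_input(length=12, rng=random.Random(0)):
    # Purely random characters: may achieve coverage, but the resulting
    # "scenario" is hard for a human to interpret and check.
    return "".join(rng.choice(string.ascii_letters + "@.") for _ in range(length))

# Hypothetical stand-in for knowledge extracted from programmers,
# source code or documentation: pools of realistic tokens.
REALISTIC_LOCAL = ["alice", "bob.smith", "support"]
REALISTIC_DOMAIN = ["example.com", "mail.org"]

def realistic_input(rng=random.Random(1)):
    # Compose test data from realistic fragments rather than raw characters.
    return "{}@{}".format(rng.choice(REALISTIC_LOCAL), rng.choice(REALISTIC_DOMAIN))

print(arbitrary_input())   # arbitrary-looking string, slow for a human to assess
print(realistic_input())   # e.g. an address like "alice@example.com"
```

Both generators target the same structural goal; the difference the paper is concerned with is the qualitative cost of the human checking step, which the realistic variant reduces.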