Automatically generated test cases are usually evaluated in terms of their fault-revealing or coverage capability. Besides these two aspects, test cases are also a major source of information for fault localization and fixing. The impact of automatically generated test cases on debugging, compared to manually written test cases, has never been studied before. In this paper we report the results of two controlled experiments in which human subjects performed debugging tasks using either automatically generated or manually written test cases. We investigate whether the features that make automatically generated test cases less readable and understandable (e.g., unclear test scenarios, meaningless identifiers) affect the accuracy and efficiency of debugging. The empirical study thus examines whether, despite their limited readability, subjects can still take advantage of automatically generated test cases during debugging.
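The readability contrast the study draws between the two kinds of test cases can be illustrated with a minimal sketch. The JUnit example below is hypothetical and not taken from the paper's experimental objects: the class under test (java.util.ArrayList) and all identifiers are chosen for illustration only. The first test mimics the style typically produced by feedback-directed random generation tools such as Randoop (numbered test names, meaningless variable names, no recognizable scenario); the second is an equivalent manually written test with intention-revealing names.

```java
import static org.junit.Assert.*;

import java.util.ArrayList;
import org.junit.Test;

public class ReadabilityContrastTest {

    // Style typical of feedback-directed random generation (e.g., Randoop):
    // meaningless identifiers and no explicit usage scenario.
    @Test
    public void test042() {
        ArrayList<Integer> var0 = new ArrayList<Integer>();
        boolean var1 = var0.add(Integer.valueOf(-1));
        boolean var2 = var0.add(Integer.valueOf(10));
        int var3 = var0.size();
        assertTrue(var1);
        assertTrue(var2);
        assertEquals(2, var3);
    }

    // A manually written test of the same behavior: descriptive names make
    // the intended scenario and expected outcome easier to understand.
    @Test
    public void sizeGrowsByOneForEachAddedElement() {
        ArrayList<Integer> numbers = new ArrayList<Integer>();
        numbers.add(-1);
        numbers.add(10);
        assertEquals(2, numbers.size());
    }
}
```

Both tests exercise the same behavior; the difference lies only in how much of the test scenario a debugging subject can reconstruct from the code itself.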