Testing-based fault-localization (TBFL) approaches often require high-statement-coverage test suites that sufficiently exercise the areas around the faults. In practice, however, fault localization often starts with a test suite whose quality is insufficient for applying TBFL approaches. Recent capture/replay tools or traditional test-generation tools can be used to acquire a high-statement-coverage test collection (i.e., test inputs only, without expected outputs), but it is expensive or even infeasible for developers to manually inspect the results of so many test inputs. To enable practical application of TBFL approaches, we propose three strategies for reducing the test inputs in an existing test collection before result inspection. All three strategies are based on the execution traces of test runs using the test inputs. With these strategies, developers can select only a representative subset of the test inputs for result inspection and fault localization. We implemented and applied the three test-input-reduction strategies to a series of benchmarks: the Siemens programs, DC, and TCC. The experimental results show that our approach can help developers inspect the results of a much smaller subset (less than 10%) of test inputs, whose fault-localization effectiveness is close to that of the whole test collection.
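The abstract does not spell out the three reduction strategies, but all of them operate on execution traces of test runs. As an illustrative sketch only (not the authors' actual strategies), the following Python snippet shows one plausible trace-based reduction: a greedy selection that keeps a representative subset of test inputs whose traces together cover every statement covered by the whole collection. All names here are hypothetical.

```python
# Sketch of one trace-based test-input reduction strategy (hypothetical,
# not the paper's actual method): greedy statement-coverage selection.

def reduce_test_inputs(traces):
    """Greedily pick test inputs whose execution traces together cover
    every statement covered by the whole test collection.

    traces: dict mapping test-input id -> set of covered statement ids
    Returns the selected subset of test-input ids, in selection order.
    """
    remaining = set().union(*traces.values())  # statements still uncovered
    candidates = dict(traces)
    selected = []
    while remaining:
        # Choose the input whose trace covers the most uncovered statements.
        best = max(candidates, key=lambda t: len(candidates[t] & remaining))
        gain = candidates[best] & remaining
        if not gain:
            break  # no candidate adds coverage; stop early
        selected.append(best)
        remaining -= gain
        del candidates[best]
    return selected

# Example: four test inputs and the statements each one executes.
traces = {
    "t1": {1, 2, 3},
    "t2": {2, 3},
    "t3": {3, 4, 5},
    "t4": {5},
}
subset = reduce_test_inputs(traces)  # a small representative subset
```

With such a reduction, developers would only need to inspect the expected outputs for the selected subset rather than for every generated test input; the paper's strategies additionally aim to preserve fault-localization effectiveness, which plain coverage-based reduction does not guarantee.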