Test coverage criteria including boundary-value coverage and logical coverage such as Modified Condition/Decision Coverage (MC/DC) have been increasingly used in safety-critical or mission-critical domains, complementing the more commonly used structural coverage criteria such as block or branch coverage. However, existing automated test-generation approaches often target block or branch coverage for test generation and selection, and therefore do not support testing against boundary-value or logical coverage. To address this issue, we propose a general approach that uses instrumentation to guide existing test-generation approaches to generate test inputs that achieve boundary-value and logical coverage for the program under test. Our preliminary evaluation shows that our approach effectively helps an approach based on Dynamic Symbolic Execution (DSE) to improve the boundary-value and logical coverage of generated test inputs. The evaluation results show a maximum increase of 30.5% (23% on average) in boundary-value coverage and a maximum increase of 26% (21.5% on average) in logical coverage of the subject programs when using our approach, compared with not using it. In addition, our approach improves the fault-detection capability of the generated test inputs by a maximum of 12.5% (11% on average) compared with test inputs generated without our approach.
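
The abstract does not include code, but the following minimal C sketch illustrates the general instrumentation idea it describes, under our own assumptions: side-effect-free probe branches are inserted ahead of a decision so that a branch-coverage-directed test generator (such as a DSE engine) is steered toward boundary values and MC/DC-relevant condition combinations. All names and the probe placement here are hypothetical illustrations, not the authors' actual implementation.

/*
 * Minimal sketch (our illustration, not the paper's implementation) of
 * instrumentation that guides a branch-coverage-directed test generator,
 * e.g., a DSE engine, toward boundary-value and MC/DC coverage.
 */
#include <stdio.h>

/* Original decision under test: a relational bound and two conditions. */
static int original(int x, int a, int b) {
    if (x > 10 && (a || b))
        return 1;
    return 0;
}

/* Instrumented variant: the added branches have empty bodies, so the
 * observable behavior is unchanged, but a generator that maximizes
 * branch coverage must now produce inputs exercising each probe. */
static int instrumented(int x, int a, int b) {
    /* Boundary-value probes for "x > 10": just below, at, just above. */
    if (x == 9)  { /* boundary - 1 */ }
    if (x == 10) { /* boundary     */ }
    if (x == 11) { /* boundary + 1 */ }

    /* MC/DC probes for "a || b": each condition is shown to
       independently determine the decision outcome. */
    if (a && !b)  { /* a alone makes the decision true */ }
    if (!a && b)  { /* b alone makes the decision true */ }
    if (!a && !b) { /* both false: decision is false   */ }

    if (x > 10 && (a || b))
        return 1;
    return 0;
}

int main(void) {
    /* Example inputs a DSE tool might generate to cover the probes. */
    printf("%d %d\n", instrumented(11, 1, 0), original(11, 1, 0));
    printf("%d %d\n", instrumented(10, 0, 1), original(10, 0, 1));
    return 0;
}

Under this view, a tool that already targets branch coverage implicitly pursues the stronger criteria: covering every added branch forces it to generate boundary inputs (x = 9, 10, 11) and the condition combinations MC/DC requires, which is the kind of improvement the evaluation above measures.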