Performance of computer-based systems may depend on many factors, both internal and external. To design a system for the desired performance, or to validate that a system meets its performance requirements, the effect of the influencing factors must be known. Common methods give little or no guidance on how to vary the factors during prototyping or validation, and varying the factors in all possible combinations would be too expensive and too time-consuming. This paper introduces a systematic approach to prototyping and validating a system's performance by treating either activity as a designed experiment and applying the fractional factorial design methodology. To show that this is feasible, a case study is described that evaluates the factors influencing the false- and real-target rates of a radar system. Our findings show that prototyping and validation of system performance become structured and effective when fractional factorial design is used: the methodology supports planning, execution, and structured analysis; gives guidance on appropriate test cases; and reveals not only main effects but also interaction effects, while minimizing the effort needed to obtain the results. In the case study, 112 test cases, out of 1,024 possible, were enough to draw conclusions about the effects and interactions of 10 factors, a reduction by a factor of 5 to 9 compared to alternative methods.
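The paper's actual design is not reproduced here, but the core idea of a two-level fractional factorial design can be sketched as follows: run a full factorial over a small set of base factors, and set each additional factor's level as the product of some base-factor levels (its generator). The function below is a minimal, self-contained sketch; the specific generators and the 2^(10-5) split are illustrative assumptions, not the design used in the case study.

```python
from itertools import product

def fractional_factorial(k_base, generators):
    """Build a two-level fractional factorial design.

    k_base     -- number of base factors, run as a full 2**k_base factorial
    generators -- for each extra factor, a tuple of base-factor indices
                  whose coded levels are multiplied to set its level
    Levels are coded -1/+1. Returns a list of runs (tuples of levels).
    """
    runs = []
    for base in product((-1, 1), repeat=k_base):
        extra = []
        for gen in generators:
            level = 1
            for i in gen:           # multiply the chosen base-factor levels
                level *= base[i]
            extra.append(level)
        runs.append(base + tuple(extra))
    return runs

# Illustrative 2^(10-5) design: 10 factors studied in 32 runs
# instead of the 2^10 = 1,024 runs of a full factorial.
design = fractional_factorial(
    5,
    [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3), (0, 1, 2, 3)],
)
print(len(design), len(design[0]))  # 32 runs, 10 factor levels per run
```

Each run is one test case; analyzing the responses over the coded columns yields estimates of the main effects and (aliased groups of) interaction effects, which is how the methodology narrows 1,024 combinations down to a small, informative subset.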