Computer scientists and software engineers seldom rely on experimental methods, despite frequent calls to do so. The problem may lie in the shortcomings of traditional experimental methods. We introduce a new form of experimental design, the synthetic design, that addresses these shortcomings. Compared with classical experimental designs (between-subjects, within-subjects, and matched-subjects), synthetic designs can substantially reduce sample sizes, cost, time, and effort; increase statistical power; and reduce threats to validity (internal, external, and statistical conclusion). The new design is a variation of the within-subjects design in which each system user serves in only a single treatment condition; system performance scores for all other treatment conditions are derived synthetically, without repeated testing of each subject. Although not applicable in all situations, the design can be used in developing and testing computer systems whenever user behavior is unaffected by the version of the system being used. We justify synthetic designs on three grounds: they have been used successfully in the development of computerized mug shot systems, showing marked advantages over traditional designs; a detailed comparison shows they outperform traditional designs on 17 of the 18 criteria considered; and an assessment shows they satisfy all the requirements of true experiments (albeit in a novel way).
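The core idea can be illustrated with a minimal sketch. Assuming a mug-shot-style task in which each subject's behavior (here reduced to a single numeric trace value) does not depend on the system version, one real session per subject can be replayed through every candidate version to yield synthetic within-subjects scores. The function names, the `version_gain` parameter, and the scoring rule below are all hypothetical illustrations, not the authors' procedure:

```python
import random
import statistics

random.seed(0)

# Hypothetical sketch of a synthetic design: each subject is tested once,
# producing a version-independent behavior trace. Every candidate system
# version is then scored against the same recorded traces, so each subject
# contributes a score under every treatment condition without retesting.

def record_subject_behavior(n_subjects):
    # One real session per subject; clamp at 0 so traces are non-negative.
    return [max(0.0, random.gauss(0.5, 0.1)) for _ in range(n_subjects)]

def score_version(traces, version_gain):
    # Replay the recorded behavior through a candidate system version
    # (illustrative scoring rule: gain applied to the trace, capped at 1.0).
    return [min(1.0, t * version_gain) for t in traces]

traces = record_subject_behavior(30)
versions = {"baseline": 1.0, "variant_a": 1.2, "variant_b": 0.9}

# Synthetic within-subjects data: 30 subjects, each tested once, yet each
# has a derived score under all three versions.
results = {name: score_version(traces, g) for name, g in versions.items()}
for name, scores in results.items():
    print(name, round(statistics.mean(scores), 3))
```

Because every version is evaluated on identical behavior traces, between-version comparisons are free of the individual-differences noise that inflates sample sizes in between-subjects designs, which is the source of the statistical-power advantage the abstract claims.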