Mental representations of programs by novices and experts
INTERCHI '93 Proceedings of the INTERCHI '93 conference on Human factors in computing systems
Replicating Software Engineering Experiments: Addressing the Tacit Knowledge Problem
ISESE '02 Proceedings of the 2002 International Symposium on Empirical Software Engineering
Mental models and programming aptitude
Proceedings of the 12th annual SIGCSE conference on Innovation and technology in computer science education
On the difficulty of replicating human subjects studies in software engineering
Proceedings of the 30th international conference on Software engineering
Using differences among replications of software engineering experiments to gain knowledge
ESEM '09 Proceedings of the 2009 3rd International Symposium on Empirical Software Engineering and Measurement
Background. Literal or theoretical replications are important for evaluating and assessing empirical results. However, there are still few replications in software engineering, and even fewer external replications, i.e., those performed by researchers other than the original ones. Aim. This paper discusses the difficulties encountered and the lessons learned from performing two literal replications of an experiment involving human subjects. Results. Our results apparently contradict the conclusions of the original experiment. However, several differences in context made it difficult to achieve valid comparability. Conclusion. Experiments involving human subjects should collect and report as much qualitative context information as possible, so that the results can be related to the conditions under which the hypotheses were found to hold. Moreover, given the difficulties encountered in this study, literal replication does not seem to be the best strategy for experiments involving human subjects in software engineering.