Background. Several researchers have criticized the standards of performing and reporting empirical studies in software engineering. To address this problem, Andreas Jedlitschka and Dietmar Pfahl produced reporting guidelines for controlled experiments in software engineering. They pointed out that their guidelines needed evaluation. We agree that guidelines need to be evaluated before they can be widely adopted; if guidelines are flawed, they will cause more problems than they solve.

Aim. The aim of this paper is to present the method we used to evaluate the guidelines and to report the results of our evaluation exercise. We suggest that our evaluation process may be of more general use if reporting guidelines for other types of empirical study are developed.

Method. We used perspective-based inspections to perform a theoretical evaluation of the guidelines. A separate inspection was performed for each perspective. The perspectives used were: Researcher, Practitioner/Consultant, Meta-analyst, Replicator, Reviewer and Author. Apart from the Author perspective, the inspections were based on a set of questions derived by brainstorming. The inspection using the Author perspective reviewed each section of the guidelines sequentially.

Results. The question-based perspective inspections detected 42 issues where the guidelines would benefit from amendment or clarification, and 8 defects.

Conclusions. Reporting guidelines need to specify what information goes into which section and avoid excessive duplication. Software engineering researchers need to be cautious about adopting reporting guidelines that differ from those used by other disciplines. The current guidelines need to be revised, and the revised guidelines need to be subjected to further theoretical and empirical validation. Perspective-based inspection is a useful validation method, but the practitioner/consultant perspective presents difficulties.