A quality checklist for technology-centred testing studies
EASE'09 Proceedings of the 13th international conference on Evaluation and Assessment in Software Engineering
Background: A recent set of guidelines for software engineering systematic literature reviews (SLRs) includes a list of quality criteria obtained from the literature. The guidelines suggest that the list can be used to construct a tailored set of questions for evaluating the quality of primary studies.

Aim: This paper aims to evaluate whether the list of quality criteria helps researchers construct tailored quality checklists.

Method: We undertook a participant-observer case study to investigate the list of quality criteria. The "case" in this study was the planning stage of a systematic literature review on unit testing.

Results: The checklists in our SLR guidelines do not provide sufficient help with constructing a quality checklist for a specific SLR, either for novices or for experienced researchers. However, the checklists are reasonably complete, and they lead to a common terminology for the quality questions selected for a specific systematic literature review.

Conclusions: The guidelines document should be amended to include a much shorter generic checklist. Researchers constructing quality checklists might find it useful to adopt a team-based process and to provide suggestions for how each quality question should be answered.