How Reliable Are Systematic Reviews in Empirical Software Engineering?

  • Authors:
  • Stephen MacDonell (Auckland University of Technology, Auckland)
  • Martin Shepperd (Brunel University, West London)
  • Barbara Kitchenham (Keele University, Keele)
  • Emilia Mendes (The University of Auckland, Auckland)

  • Venue:
  • IEEE Transactions on Software Engineering
  • Year:
  • 2010

Abstract

BACKGROUND: The systematic review is becoming a more commonly employed research instrument in empirical software engineering. Before undue reliance is placed on the outcomes of such reviews, it would seem useful to consider the robustness of the approach in this particular research context.

OBJECTIVE: The aim of this study is to assess the reliability of systematic reviews as a research instrument. In particular, we wish to investigate the consistency of process and the stability of outcomes.

METHOD: We compare the results of two independent reviews undertaken with a common research question.

RESULTS: The two reviews find similar answers to the research question, although the means of arriving at those answers vary.

CONCLUSIONS: In addressing a well-bounded research question, groups of researchers with similar domain experience can arrive at the same review outcomes, even though they may do so in different ways. This provides evidence that, in this context at least, the systematic review is a robust research method.