An assessment of published evaluations of requirements management tools

  • Authors:
  • Austen Rainer, Sarah Beecham, Cei Sanderson

  • Affiliations:
  • School of Computer Science, University of Hertfordshire, Hatfield, Hertfordshire, UK (Austen Rainer, Cei Sanderson); School of Information Systems, Computing and Mathematics, Brunel University, Middlesex, UK (Sarah Beecham)

  • Venue:
  • EASE'09: Proceedings of the 13th International Conference on Evaluation and Assessment in Software Engineering
  • Year:
  • 2009

Abstract

Context: The traditional literature review is a low-cost, relatively quick, but potentially ineffective method for evaluating tools. Practitioners appear to place greater emphasis on the practical constraints of an evaluation (e.g. that it is low cost and quick) and on the efficacy of the technology for their company, rather than on generic scientific results. By contrast, academia appears to place greater emphasis on theory confirmation, rigour and validity; academic literature reviews focus on literature published in peer-reviewed journals and conferences, and tend not to consider the trade and 'grey' literature.

Objectives: To assess the quality and quantity of published evaluations of requirements management tools (RMTs) reported in the academic, 'grey' and trade literatures.

Method: Three independent literature reviews were conducted to identify published evaluations of RMTs. The reviews were conducted by three different types of reviewers: a practitioner in a company, an experienced researcher, and 19 final-year undergraduate students. The researcher and the students followed a version of Evidence Based Software Engineering to undertake their literature reviews; the practitioner undertook an ad hoc literature review. Publications were then screened to select higher-quality evaluations, which were analysed to identify the RMTs evaluated.

Results: The three literature reviews found a total of 28 evaluations referring to 14 RMTs, of which 6 evaluations were duplicates, giving 22 unique evaluations. Evaluations were identified from approximately 2000 to 2007, with an average of about 3 evaluations published per year.

Conclusions/implications: Given the number of commercial RMTs on the market (40) and the few evaluations published per year, there are surprisingly few higher-quality evaluations. There is a noticeable bias toward evaluating the market-leading RMTs. Given the rate of change in the IT industry, there may be a need to re-evaluate RMTs every two years or less. Overall, there appears to be a poor 'base' of up-to-date published evaluations of RMTs available for use in literature reviews. Literature reviews appear useful for short-listing RMTs for subsequent in-company evaluation, and for benchmarking, but care should be taken to include non-market-leading RMTs in the short-list.