Revisiting the problem of using problem reports for quality assessment

  • Authors:
  • Parastoo Mohagheghi;Reidar Conradi;Jon Arvid Børretzen

  • Affiliations:
  • Norwegian University of Science and Technology, Trondheim, Norway (all authors)

  • Venue:
  • Proceedings of the 2006 international workshop on Software quality
  • Year:
  • 2006

Abstract

In this paper, we describe our experience with using problem reports from industry for quality assessment. The non-uniform terminology used in problem reports and the associated validity concerns have been the subject of earlier research but are far from settled. To distinguish between terms such as defects and errors, we propose answering three questions about the scope of a study: what is recorded (a problem's appearance or its cause), where the problem lies (in software, executable or not, or in the system), and when it is recorded (in all development life-cycle phases or only some of them). We discuss challenges in defining research questions and metrics, collecting and analyzing data, generalizing the results, and reporting them. Ambiguity in defining problem-report fields, together with missing, inconsistent, or wrong data, threatens the value of the collected evidence. Some of these concerns could be settled by answering a few basic questions about the problem-reporting fields and by improving data-collection routines and tools.
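The what/where/when scoping described in the abstract can be sketched as a small data model. The following is a hypothetical illustration, not the authors' actual classification scheme: the enum values and field names are assumptions chosen to mirror the three questions, and the helper shows the kind of missing-data check the abstract argues is needed.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class What(Enum):
    """What is recorded: the problem's appearance or its cause."""
    APPEARANCE = "appearance"   # e.g. an observed failure
    CAUSE = "cause"             # e.g. a fault in an artifact

class Where(Enum):
    """Where the problem lies (categories assumed for illustration)."""
    EXECUTABLE_SOFTWARE = "executable software"
    NON_EXECUTABLE_SOFTWARE = "non-executable artifact"  # e.g. design docs
    SYSTEM = "system"

class When(Enum):
    """Life-cycle phase in which the problem was recorded (assumed phases)."""
    DESIGN = "design"
    IMPLEMENTATION = "implementation"
    TESTING = "testing"
    OPERATION = "operation"

@dataclass
class ProblemReport:
    """A minimal problem report with the three scope fields left optional,
    so that missing data can be detected rather than silently defaulted."""
    report_id: str
    what: Optional[What] = None
    where: Optional[Where] = None
    when: Optional[When] = None

def missing_scope_fields(report: ProblemReport) -> list:
    """Return the names of scope fields left empty -- the kind of missing
    data the abstract warns can threaten the collected evidence."""
    return [name for name in ("what", "where", "when")
            if getattr(report, name) is None]

# Hypothetical usage: a report classified on two of the three dimensions.
pr = ProblemReport("PR-1", what=What.CAUSE, when=When.TESTING)
print(missing_scope_fields(pr))  # ['where']
```

Making the scope fields explicit and optional lets a study report how many records were incomplete instead of mixing incomparable data, which is one way to address the validity concerns the paper raises.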