Accuracy of software quality models over multiple releases

  • Authors:
  • Taghi M. Khoshgoftaar; Edward B. Allen; Wendell D. Jones; John P. Hudepohl

  • Affiliations:
  • Department of Computer Science and Engineering, Florida Atlantic University, Boca Raton, FL 33431‐0991, USA (E-mail: taghi@cse.fau.edu); EMERALD, a Business Unit of Nortel Networks, P.O. Box 13010, Research Triangle Park, NC 27709‐3478, USA

  • Venue:
  • Annals of Software Engineering
  • Year:
  • 2000

Abstract

Many evolving mission‐critical systems must have high software reliability. However, it is often difficult to identify fault‐prone modules early enough in a development cycle to guide software enhancement efforts effectively and efficiently. Software quality models can yield timely predictions of membership in the fault‐prone class on a module‐by‐module basis, enabling one to target enhancement techniques. However, it is an open empirical question, “Can a software quality model remain useful over several releases?” Most prior software quality studies have examined only one release of a system, evaluating the model with modules from the same release. We conducted a case study of a large legacy telecommunications system where measurements on one software release were used to build models, and three subsequent releases of the same system were used to evaluate model accuracy. This is a realistic assessment of model accuracy, closely simulating actual use of a software quality model. A module was considered fault‐prone if any of its faults were discovered by customers. These faults are extremely expensive due to consequent loss of service and emergency repair efforts. We found that the model maintained useful accuracy over several releases. These findings are initial empirical evidence that software quality models can remain useful as a system is maintained by a stable software development process.
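The abstract's evaluation setup (fit a fault-proneness classifier on one release, then measure its accuracy on later releases of the same system) can be illustrated with a minimal sketch. The sketch below is an assumption-laden stand-in, not the paper's method: it uses a generic logistic-regression classifier and randomly generated per-module metrics purely as placeholders, and reports Type I / Type II misclassification rates on each "subsequent release."

```python
# Hedged sketch of cross-release evaluation of a fault-proneness model.
# The classifier, feature set, and data are illustrative assumptions,
# not the model or measurements used in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_release(n_modules):
    """Placeholder per-module metrics and fault-prone labels for one release."""
    X = rng.normal(size=(n_modules, 3))  # e.g. size, churn, complexity (hypothetical)
    y = (X.sum(axis=1) + rng.normal(size=n_modules) > 1.5).astype(int)
    return X, y

# Fit the model on measurements from one release...
X_fit, y_fit = make_release(500)
model = LogisticRegression().fit(X_fit, y_fit)

# ...then evaluate it, unchanged, on subsequent releases of the same system.
for release in range(2, 5):
    X_eval, y_eval = make_release(500)
    pred = model.predict(X_eval)
    type1 = np.mean(pred[y_eval == 0] == 1)  # not fault-prone, flagged fault-prone
    type2 = np.mean(pred[y_eval == 1] == 0)  # fault-prone, missed by the model
    print(f"release {release}: Type I = {type1:.2f}, Type II = {type2:.2f}")
```

Tracking whether these error rates stay acceptably low across releases is the kind of accuracy assessment the case study performs, under the assumption of a stable development process.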