Comparing design and code metrics for software quality prediction

  • Authors:
  • Yue Jiang; Bojan Cukic; Tim Menzies; Nick Bartlow

  • Affiliations:
  • West Virginia University, Morgantown, WV, USA (all authors)

  • Venue:
  • Proceedings of the 4th International Workshop on Predictor Models in Software Engineering
  • Year:
  • 2008

Abstract

The prediction of fault-prone modules continues to attract interest due to its significant impact on software quality assurance. One of the most important goals of such techniques is to identify, as early as possible in the development lifecycle, the modules where faults are likely to hide. Design, code, and, most recently, requirements metrics have all been used successfully to predict fault-prone modules. The goal of this paper is to compare the performance of predictive models that use design-level metrics with those that use code-level metrics and those that use both. We analyze thirteen datasets from the NASA Metrics Data Program that offer design as well as code metrics. Using a range of modeling techniques and statistical significance tests, we confirm that models built from code metrics typically outperform models based on design metrics. However, both types of models prove useful, as they can be constructed in different project phases. Code-based models can be used to improve on design-level models and thus increase the efficiency of assigning verification and validation activities late in the development lifecycle. We also conclude that models utilizing a combination of design- and code-level metrics outperform models that use either metric set alone.
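The comparison described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's actual method: it uses synthetic module data and a simple nearest-centroid classifier in place of the NASA MDP datasets and the paper's modeling techniques, and it assumes (for illustration only) that code metrics separate faulty from fault-free modules more strongly than design metrics.

```python
# Hedged sketch: compare fault-prediction models built from design-only,
# code-only, and combined metric sets. Synthetic data and a nearest-centroid
# classifier stand in for the NASA MDP data and the paper's techniques.
import math
import random

random.seed(0)

def make_module(faulty):
    # Assumption for illustration: design metrics separate the classes
    # weakly (0.5 sigma shift), code metrics strongly (1.5 sigma shift).
    base = 1.0 if faulty else 0.0
    design = [random.gauss(base * 0.5, 1.0) for _ in range(2)]
    code = [random.gauss(base * 1.5, 1.0) for _ in range(2)]
    return design + code, faulty

# 400 synthetic modules, ~30% fault-prone; 300 for training, 100 for testing.
data = [make_module(random.random() < 0.3) for _ in range(400)]
train, test = data[:300], data[300:]

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def evaluate(cols):
    """Train on the given metric columns and return test accuracy."""
    pos = centroid([[x[i] for i in cols] for x, y in train if y])
    neg = centroid([[x[i] for i in cols] for x, y in train if not y])
    correct = 0
    for x, y in test:
        v = [x[i] for i in cols]
        pred = math.dist(v, pos) < math.dist(v, neg)  # closer to faulty centroid?
        correct += (pred == y)
    return correct / len(test)

acc_design = evaluate([0, 1])        # design metrics only
acc_code = evaluate([2, 3])          # code metrics only
acc_both = evaluate([0, 1, 2, 3])    # combined metric set
print(f"design-only: {acc_design:.2f}  "
      f"code-only: {acc_code:.2f}  both: {acc_both:.2f}")
```

In the paper itself, performance is compared with statistical significance tests across thirteen datasets and multiple modeling techniques; accuracy on one synthetic split, as above, is only a stand-in for that evaluation.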