Revisiting common bug prediction findings using effort-aware models

  • Authors:
  • Yasutaka Kamei; Shinsuke Matsumoto; Akito Monden; Ken-ichi Matsumoto; Bram Adams; Ahmed E. Hassan

  • Affiliations:
  • Yasutaka Kamei, Bram Adams, Ahmed E. Hassan: Software Analysis and Intelligence Lab (SAIL), School of Computing, Queen's University, Canada
  • Shinsuke Matsumoto: Graduate School of Engineering, Kobe University, Japan
  • Akito Monden, Ken-ichi Matsumoto: Graduate School of Information Science, Nara Institute of Science and Technology, Japan

  • Venue:
  • ICSM '10: Proceedings of the 2010 IEEE International Conference on Software Maintenance
  • Year:
  • 2010

Abstract

Bug prediction models are often used to help allocate software quality assurance effort (e.g., testing and code reviews). Mende and Koschke have recently proposed bug prediction models that are effort-aware: these models factor in the effort needed to review or test code when evaluating the effectiveness of prediction models, leading to more realistic performance evaluations. In this paper, we revisit two common findings in the bug prediction literature: 1) process metrics (e.g., change history) outperform product metrics (e.g., LOC), and 2) package-level predictions outperform file-level predictions. Through a case study on three projects from the Eclipse Foundation, we find that the first finding still holds when effort is considered, while the second finding does not. These results underline the importance of validating prior findings in the bug prediction literature with effort-aware measures before recommending their adoption in practice.
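
The abstract's notion of "effort-aware" evaluation can be made concrete with a small example. The sketch below is not from the paper: the module data and the 20% budget are illustrative assumptions, with LOC standing in as a proxy for review effort. It ranks modules by predicted defect density and reports the fraction of actual bugs caught within a fixed effort budget, the style of measure against which effort-aware models such as Mende and Koschke's are evaluated.

    # A minimal sketch of effort-aware evaluation: rank modules by
    # predicted defect density (predicted bugs per unit of effort) and
    # measure how many actual bugs are found within a fixed inspection
    # budget. Effort is approximated by LOC; all data is hypothetical.

    def bugs_found_at_effort(modules, effort_budget=0.2):
        """Fraction of actual bugs found when inspecting modules in
        descending order of predicted defect density, stopping once
        `effort_budget` (a fraction of total LOC) would be exceeded."""
        total_loc = sum(m["loc"] for m in modules)
        total_bugs = sum(m["actual_bugs"] for m in modules)
        # Effort-aware ordering: predicted bugs per line of code.
        ranked = sorted(modules,
                        key=lambda m: m["predicted_bugs"] / m["loc"],
                        reverse=True)
        spent, found = 0, 0
        for m in ranked:
            if (spent + m["loc"]) / total_loc > effort_budget:
                break
            spent += m["loc"]
            found += m["actual_bugs"]
        return found / total_bugs

    # Hypothetical file-level data: LOC, predicted bugs, actual bugs.
    modules = [
        {"loc": 120,  "predicted_bugs": 3.0, "actual_bugs": 2},
        {"loc": 2400, "predicted_bugs": 4.0, "actual_bugs": 5},
        {"loc": 300,  "predicted_bugs": 0.5, "actual_bugs": 1},
        {"loc": 80,   "predicted_bugs": 1.5, "actual_bugs": 1},
    ]

    # Fraction of bugs caught while reviewing only 20% of all LOC.
    print(f"bugs found at 20% effort: {bugs_found_at_effort(modules):.2f}")

Note how the large 2,400-LOC module is ranked low despite its high raw bug count: under an effort budget, a few small defect-dense files are cheaper to review per bug found. This is the intuition behind the paper's result that package-level predictions, which aggregate many files into large units, lose their apparent advantage once effort is taken into account.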