Using checklists to review static analysis warnings

  • Authors:
  • Nathaniel Ayewah, Univ. of Maryland, College Park, MD
  • William Pugh, Univ. of Maryland, College Park, MD

  • Venue:
  • Proceedings of the 2nd International Workshop on Defects in Large Software Systems: Held in conjunction with the ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2009)
  • Year:
  • 2009

Abstract

Static analysis tools find silly mistakes, confusing code, bad practices and property violations. But software developers and organizations may or may not care about all these warnings, depending on how they impact code behavior and other factors. In the past, we have tried to identify important warnings by asking users to rate them as severe, low impact or not a bug. In this paper, we observe that the user's rating may be more complicated depending on whether the warning is feasible, changes code behavior, occurs in deployed code and other factors. To better model this, we ask users to review warnings using a checklist which enables more detailed reviews. We find that reviews are consistent across users and across checklist questions, though some users may disagree about whether to fix or filter out certain bug classes.