A Framework to Compare Alert Ranking Algorithms

  • Authors:
  • Simon Allier; Nicolas Anquetil; André Hora; Stéphane Ducasse


  • Venue:
  • WCRE '12: Proceedings of the 2012 19th Working Conference on Reverse Engineering
  • Year:
  • 2012


Abstract

To improve software quality, rule checkers statically check whether a program violates good programming practices. On a real-sized system, the alerts (rule violations detected by the tool) may number in the thousands. Unfortunately, these tools generate a high proportion of "false alerts", which, in the context of a specific system, should not be fixed. Large numbers of false alerts can make it impossible to find and correct the "true alerts" and dissuade developers from using these tools. To overcome this problem, the literature provides various ranking methods that aim to compute the probability of an alert being a true one. In this paper, we propose a framework for comparing these ranking algorithms and identifying the best approach to rank alerts. We selected six algorithms described in the literature. For the comparison, we use a benchmark covering two programming languages (Java and Smalltalk) and three rule checkers (FindBugs, PMD, SmallLint). Results show that the best ranking methods are based on the history of past alerts and their location. We could not identify any significant advantage in using statistical tools such as linear regression or Bayesian networks, or ad hoc methods.
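The abstract's main finding, that rankings based on the history and location of past alerts perform best, can be illustrated with a minimal sketch. The code below is hypothetical and not the paper's actual algorithm or API: it scores each new alert by the smoothed fraction of past alerts of the same rule at the same location that were later confirmed (fixed), then sorts alerts so the most likely true ones come first. The class and method names (`HistoryAlertRanker`, `recordPastAlert`, `rank`) are illustrative assumptions.

```java
import java.util.*;

// Hypothetical history-based alert ranker, in the spirit of the approaches
// the paper finds most effective. Not the paper's implementation.
public class HistoryAlertRanker {

    record Alert(String rule, String file) {}

    // Past observations per (rule, file) pair:
    // counts[0] = alerts seen, counts[1] = alerts confirmed true (fixed).
    private final Map<String, int[]> history = new HashMap<>();

    private static String key(Alert a) { return a.rule() + "@" + a.file(); }

    public void recordPastAlert(Alert a, boolean wasTrue) {
        int[] counts = history.computeIfAbsent(key(a), k -> new int[2]);
        counts[0]++;
        if (wasTrue) counts[1]++;
    }

    // Estimated probability that a new alert is true, with Laplace smoothing
    // so unseen (rule, file) pairs get a neutral prior of 0.5.
    public double score(Alert a) {
        int[] c = history.getOrDefault(key(a), new int[2]);
        return (c[1] + 1.0) / (c[0] + 2.0);
    }

    // Rank new alerts by descending estimated probability of being true.
    public List<Alert> rank(List<Alert> alerts) {
        return alerts.stream()
                .sorted(Comparator.<Alert>comparingDouble(this::score).reversed())
                .toList();
    }

    public static void main(String[] args) {
        HistoryAlertRanker ranker = new HistoryAlertRanker();
        Alert a = new Alert("UnusedVariable", "Order.java");
        Alert b = new Alert("EmptyCatchBlock", "Parser.java");
        ranker.recordPastAlert(a, false); // historically ignored -> score 0.33
        ranker.recordPastAlert(b, true);  // historically fixed  -> score 0.67
        System.out.println(ranker.rank(List.of(a, b))); // b ranked before a
    }
}
```

The smoothing constant and the (rule, file) granularity of the location key are design choices in this sketch; the methods compared in the paper differ in exactly such details, which is what the proposed framework is meant to evaluate.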