Building empirical support for automated code smell detection

  • Authors:
  • Jan Schumacher (University of Applied Sciences, Mannheim, Germany)
  • Nico Zazworka (Fraunhofer Center, College Park, MD)
  • Forrest Shull (Fraunhofer Center, College Park, MD)
  • Carolyn Seaman (Fraunhofer Center, College Park, MD and UMBC, Baltimore, MD)
  • Michele Shaw (Fraunhofer Center, College Park, MD)

  • Venue:
  • Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement
  • Year:
  • 2010


Abstract

Identifying refactoring opportunities in software systems is an important activity in today's agile development environments. The concept of code smells has been proposed to characterize different types of design shortcomings in code, and metric-based detection algorithms claim to identify the "smelly" components automatically. This paper presents results from an empirical study performed in a commercial environment. The study investigates how professional software developers detect god class code smells and then compares their results to automatic classification. The results show that, even though the subjects perceive detecting god classes as an easy task, agreement among their classifications is low. Misplaced methods are a strong driver for subjects identifying a class as a god class. Previously proposed metric-based detection approaches performed well compared to the human classification. These results lead to the conclusion that automated metric-based pre-selection decreases the effort spent on manual code inspections, and that automatic detection accompanied by a manual review increases overall confidence in the results of metric-based classifiers.
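To illustrate the kind of metric-based classifier the abstract refers to, the sketch below implements one well-known god-class detection strategy in the style of Lanza and Marinescu, combining Weighted Method Count (WMC), Access To Foreign Data (ATFD), and Tight Class Cohesion (TCC). The thresholds, the `ClassMetrics` container, and the example class are illustrative assumptions, not the exact rule set evaluated in the paper.

```python
from dataclasses import dataclass

# Illustrative thresholds in the spirit of Lanza & Marinescu's
# detection strategy; the study's exact values may differ.
FEW = 5            # "few" foreign attributes
WMC_VERY_HIGH = 47 # "very high" class complexity
ONE_THIRD = 1 / 3  # low cohesion cutoff

@dataclass
class ClassMetrics:
    name: str
    wmc: float   # Weighted Method Count: sum of method cyclomatic complexities
    atfd: int    # Access To Foreign Data: foreign attributes used directly
    tcc: float   # Tight Class Cohesion, in [0, 1]

def is_god_class(m: ClassMetrics) -> bool:
    """Flag a class as a god-class candidate: it is complex,
    accesses much foreign data, and has low internal cohesion."""
    return m.wmc >= WMC_VERY_HIGH and m.atfd > FEW and m.tcc < ONE_THIRD

# Hypothetical large, incohesive class touching many foreign attributes
suspect = ClassMetrics("OrderManager", wmc=62, atfd=9, tcc=0.12)
print(is_god_class(suspect))  # → True
```

In line with the paper's conclusion, such a classifier would serve as a pre-selection step: only the flagged candidates are then passed on to manual review.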