Systematic testing of refactoring engines on real software projects

  • Authors:
  • Milos Gligoric, Farnaz Behrang, Yilong Li, Jeffrey Overbey, Munawar Hafiz, Darko Marinov

  • Affiliations:
  • University of Illinois at Urbana-Champaign, Urbana, IL (Gligoric, Li, Marinov); Auburn University, Auburn, AL (Behrang, Overbey, Hafiz)

  • Venue:
  • ECOOP '13: Proceedings of the 27th European Conference on Object-Oriented Programming
  • Year:
  • 2013

Abstract

Testing refactoring engines is a challenging problem that has gained recent attention in research. Several techniques have been proposed to automate the generation of programs used as test inputs and to help developers inspect test failures. However, these techniques can require substantial effort to write test generators or to find unique bugs, and they do not provide an estimate of how reliable refactoring engines are for refactoring tasks on real software projects. This paper evaluates an end-to-end approach for testing refactoring engines and estimating their reliability by (1) systematically applying refactorings at a large number of places in well-known, open-source projects and collecting failures during refactoring or while trying to compile the refactored projects, (2) clustering failures into a small, manageable number of failure groups, and (3) inspecting failures to identify non-duplicate bugs. Using this approach on the Eclipse refactoring engines for Java and C, we have already found and reported 77 new bugs for Java and 43 for C. Despite these seemingly large numbers of bugs, we found the refactoring engines to be relatively reliable, with only 1.4% of refactoring tasks failing for Java and 7.5% for C.
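
The following is a minimal Java sketch of the workflow the abstract outlines: apply a refactoring at every candidate site in a project, record a failure from either the refactoring step or the subsequent compile, and cluster failures by a normalized message so that many failing tasks sharing one root cause collapse into a single group. The RefactoringEngine, CandidateSite, and Compiler interfaces are hypothetical stand-ins introduced here for illustration; they are not the authors' tool or the real Eclipse JDT/CDT APIs.

import java.util.*;

// Sketch of systematic refactoring-engine testing (hypothetical interfaces,
// not the paper's implementation).
public class SystematicRefactoringTester {

    // Hypothetical handle to one place where the refactoring can be applied.
    interface CandidateSite { String describe(); }

    // Hypothetical wrapper around the refactoring engine under test.
    interface RefactoringEngine {
        List<CandidateSite> findCandidateSites(String projectPath);
        // Applies the refactoring; returns an error message, or null on success.
        String apply(CandidateSite site);
        void undo();  // restore the project before the next refactoring task
    }

    // Hypothetical compiler driver; returns compile errors (empty if clean).
    interface Compiler { List<String> compile(String projectPath); }

    public static Map<String, List<String>> run(RefactoringEngine engine,
                                                Compiler compiler,
                                                String projectPath) {
        // Failure groups: normalized message -> sites exhibiting that failure.
        Map<String, List<String>> failureGroups = new TreeMap<>();
        int tasks = 0, failures = 0;

        for (CandidateSite site : engine.findCandidateSites(projectPath)) {
            tasks++;
            String error = engine.apply(site);                        // step (1a): refactor
            if (error == null) {
                List<String> compileErrors = compiler.compile(projectPath); // step (1b): compile
                if (!compileErrors.isEmpty()) {
                    error = compileErrors.get(0);
                }
            }
            if (error != null) {
                failures++;
                // Step (2): cluster by a normalized message key (numbers and
                // quoted identifiers stripped) so duplicates fall together.
                String key = error.replaceAll("\\d+", "N").replaceAll("'[^']*'", "'_'");
                failureGroups.computeIfAbsent(key, k -> new ArrayList<>())
                             .add(site.describe());
            }
            engine.undo();
        }
        System.out.printf("%d/%d tasks failed (%.1f%%), %d failure groups%n",
                failures, tasks, 100.0 * failures / Math.max(tasks, 1),
                failureGroups.size());
        return failureGroups;  // step (3): inspect one representative per group
    }
}

Inspecting only one representative failure per group is what keeps the manual effort manageable even when many refactoring tasks fail.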