Evaluating the accuracy of Java profilers

  • Authors:
  • Todd Mytkowicz (University of Colorado, Boulder, CO, USA); Amer Diwan (University of Colorado, Boulder, CO, USA); Matthias Hauswirth (University of Lugano, Lugano, Switzerland); Peter F. Sweeney (IBM Research, Hawthorne, NY, USA)

  • Venue:
  • PLDI '10: Proceedings of the 2010 ACM SIGPLAN Conference on Programming Language Design and Implementation
  • Year:
  • 2010

Abstract

Performance analysts profile their programs to find methods that are worth optimizing: the "hot" methods. This paper shows that four commonly-used Java profilers (xprof, hprof, jprofile, and yourkit) often disagree on the identity of the hot methods. If two profilers disagree, at least one must be incorrect. Thus, there is a good chance that a profiler will mislead a performance analyst into wasting time optimizing a cold method with little or no performance improvement. This paper uses causality analysis to evaluate profilers and to gain insight into the source of their incorrectness. It shows that these profilers all violate a fundamental requirement for sampling-based profilers: to be correct, a sampling-based profiler must collect samples randomly. We show that a proof-of-concept profiler, which collects samples randomly, does not suffer from the above problems. Specifically, we show, using a number of case studies, that our profiler correctly identifies methods that are important to optimize; in some cases other profilers report that these methods are cold and thus not worth optimizing.
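
To make the randomness requirement concrete, below is a minimal, hypothetical sketch in Java (not the authors' proof-of-concept profiler) of a sampler that draws exponentially distributed inter-sample intervals, so that sample times cannot synchronize with periodic program behavior the way a fixed-period timer can. One caveat: a pure-Java sampler built on Thread.getAllStackTraces() still pauses threads only at JVM safepoints, which is closely related to the bias the paper identifies in existing profilers; the sketch illustrates only the randomized-interval idea.

```java
import java.util.Map;
import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a randomized sampling profiler. Rather than waking
// at a fixed period, which can fall into lockstep with the program's own
// periodic behavior, it draws each inter-sample interval from an exponential
// distribution. The exponential is memoryless, so the next sample time is
// uncorrelated with anything the program is doing.
public class RandomSamplingProfiler extends Thread {
    private final Map<String, AtomicLong> counts = new ConcurrentHashMap<>();
    private final Random rng = new Random();
    private final double meanIntervalMillis;
    private volatile boolean running = true;

    public RandomSamplingProfiler(double meanIntervalMillis) {
        this.meanIntervalMillis = meanIntervalMillis;
        setDaemon(true);
    }

    @Override
    public void run() {
        while (running) {
            try {
                // Inverse-transform sampling of an exponential distribution
                // with the configured mean; nextDouble() is in [0, 1).
                long sleep = Math.max(1,
                        (long) (-meanIntervalMillis * Math.log(1 - rng.nextDouble())));
                Thread.sleep(sleep);
            } catch (InterruptedException e) {
                return;
            }
            // Record the currently executing (top-of-stack) method of every
            // application thread. Note: this API observes threads at JVM
            // safepoints, so a production profiler would need a lower-level
            // mechanism (e.g. OS signals) to sample truly arbitrary points.
            for (Map.Entry<Thread, StackTraceElement[]> e
                    : Thread.getAllStackTraces().entrySet()) {
                StackTraceElement[] stack = e.getValue();
                if (e.getKey() != this && stack.length > 0) {
                    String method = stack[0].getClassName() + "."
                            + stack[0].getMethodName();
                    counts.computeIfAbsent(method, k -> new AtomicLong())
                          .incrementAndGet();
                }
            }
        }
    }

    public void shutdown() { running = false; interrupt(); }

    // Sample counts per method; methods with the most samples are the
    // candidates for "hot" methods.
    public Map<String, AtomicLong> hotMethods() { return counts; }
}
```

A caller would start the sampler before the workload (e.g. `new RandomSamplingProfiler(10.0).start()`), run the program, then call `shutdown()` and rank the entries of `hotMethods()` by count. The design point worth noting is the interval distribution alone: a fixed-period sampler can report systematically skewed counts if the program's behavior is itself periodic, whereas memoryless intervals remove that correlation.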