A microbenchmark case study and lessons learned

  • Authors:
  • Joseph Yossi Gil, Keren Lenz, Yuval Shimron

  • Affiliations:
  • Israel Institute of Technology, Technion City, Haifa, Israel (all authors)

  • Venue:
  • Proceedings of the compilation of the co-located workshops on DSM'11, TMC'11, AGERE!'11, AOOPES'11, NEAT'11, & VMIL'11
  • Year:
  • 2011

Abstract

The extra abstraction layer introduced by the virtual machine, the JIT compilation cycles, and the asynchronous garbage collector are the main reasons that benchmarking Java code is a delicate task. The primary weapon in battling these is replication: "billions and billions of runs" is a phrase sometimes used by practitioners. This paper describes a case study, which consumed hundreds of hours of CPU time, and tries to characterize the inconsistencies we encountered in the results.
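The replication strategy the abstract alludes to can be illustrated with a minimal sketch: an untimed warmup phase gives the JIT a chance to compile the hot code, and many timed repetitions are then collected so their distribution can be inspected. The class name `ReplicatedBenchmark` and the toy workload below are our own illustration, not code from the paper.

```java
import java.util.Arrays;

// Hypothetical sketch of replicated microbenchmarking (not from the paper).
public class ReplicatedBenchmark {

    // Execute the task repeatedly without timing, letting JIT
    // compilation and class loading settle before measurement.
    static void warmup(Runnable task, int rounds) {
        for (int i = 0; i < rounds; i++) {
            task.run();
        }
    }

    // Time `runs` repetitions and return the sorted samples in
    // nanoseconds, so callers can inspect the median, which is less
    // sensitive to occasional garbage-collection pauses than the mean.
    static long[] measure(Runnable task, int runs) {
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            task.run();
            samples[i] = System.nanoTime() - start;
        }
        Arrays.sort(samples);
        return samples;
    }

    public static void main(String[] args) {
        // Toy workload: sum a small array.
        int[] data = new int[1024];
        for (int i = 0; i < data.length; i++) data[i] = i;
        Runnable task = () -> {
            long sum = 0;
            for (int v : data) sum += v;
        };
        warmup(task, 10_000);
        long[] samples = measure(task, 1_000);
        System.out.println("median ns per run: " + samples[samples.length / 2]);
    }
}
```

Even a harness like this leaves the inconsistencies the paper studies: JIT recompilation and GC can perturb individual samples, which is why the sorted distribution, rather than a single number, is worth examining.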