Although applications running on virtual machines, such as the Java virtual machine, achieve platform independence, their performance is difficult to evaluate and analyze because of the extra intermediate layers and the dynamic nature of the virtual execution environment. We present a framework for analyzing performance across multiple runs of a program, possibly in dramatically different execution environments. The framework builds on our prior lightweight instrumentation technique for constructing a calling context tree (CCT) of methods at runtime. We first represent each run of a program as a CCT, annotating its edges and nodes with performance attributes such as call counts or elapsed times. We then identify components of the CCTs that are topologically identical but differ significantly in their performance attributes. Next, we identify the topological differences between two CCTs while ignoring the performance attributes. Finally, the combined topological and performance-attribute differences are reported to software developers or performance analysts for further scrutiny. The analysis is iterative: topological edits, such as deletion, addition, and renaming of nodes, are applied until the two CCTs match. We applied this methodology to a number of well-known Java benchmarks and a large J2EE application, using call counters as the performance attribute. Our results indicate that the approach efficiently and effectively localizes differences to a small percentage of CCT nodes. For most of the test programs, only a few topological edits are needed to make any two CCTs from the same program identical, and once the CCTs are topologically matched, changing fewer than 2% of the performance attributes suffices to achieve a 90% overlap between the two CCTs' attributes. We have also applied the framework to identify subtle configuration differences in complex server applications.
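To make the comparison steps concrete, below is a minimal, self-contained Java sketch of the core idea: a CCT node annotated with a call counter, plus a walk that reports significant attribute deltas on topologically matched nodes and flags contexts added or deleted between runs. The class names (CctNode, CctDiff), the relative-change threshold, and the demo data are illustrative assumptions, not the paper's implementation; the actual matching algorithm also handles node renaming and uses a more refined notion of significance.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** A calling-context-tree node annotated with one performance attribute (a call counter). */
final class CctNode {
    final String method;                               // method signature at this calling context
    long callCount;                                    // performance attribute for this context
    final Map<String, CctNode> children = new LinkedHashMap<>();

    CctNode(String method, long callCount) {
        this.method = method;
        this.callCount = callCount;
    }

    /** Returns the child for this callee, creating it on first use. */
    CctNode child(String method, long callCount) {
        return children.computeIfAbsent(method, m -> new CctNode(m, callCount));
    }
}

/** Compares two CCTs: attribute deltas on matched nodes, plus topological mismatches. */
final class CctDiff {
    private final double threshold;                    // relative change deemed "significant" (assumed)

    CctDiff(double threshold) { this.threshold = threshold; }

    List<String> compare(CctNode a, CctNode b) {
        List<String> report = new ArrayList<>();
        walk(a, b, a.method, report);
        return report;
    }

    private void walk(CctNode a, CctNode b, String path, List<String> report) {
        // Step 1: attribute difference on a topologically matched node.
        double base = Math.max(1, a.callCount);
        double delta = Math.abs(a.callCount - b.callCount) / base;
        if (delta > threshold) {
            report.add(String.format("%s: calls %d -> %d (%.0f%%)",
                    path, a.callCount, b.callCount, 100 * delta));
        }
        // Step 2: topological differences, ignoring attributes.
        for (Map.Entry<String, CctNode> e : a.children.entrySet()) {
            CctNode other = b.children.get(e.getKey());
            if (other == null) {
                report.add(path + "/" + e.getKey() + ": deleted in second run");
            } else {
                walk(e.getValue(), other, path + "/" + e.getKey(), report);
            }
        }
        for (String m : b.children.keySet()) {
            if (!a.children.containsKey(m)) {
                report.add(path + "/" + m + ": added in second run");
            }
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        CctNode run1 = new CctNode("main", 1);
        run1.child("parse", 100).child("tokenize", 5000);
        run1.child("render", 40);

        CctNode run2 = new CctNode("main", 1);
        run2.child("parse", 100).child("tokenize", 9000); // attribute change only
        run2.child("renderV2", 40);                       // topological change

        for (String line : new CctDiff(0.5).compare(run1, run2)) {
            System.out.println(line);
        }
    }
}

Run as-is, the demo reports one attribute difference (tokenize: 5000 -> 9000) and one deleted/added pair (render vs. renderV2), mirroring the two kinds of differences the framework separates before feeding results back to developers.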