The Performance Cockpit Approach: A Framework For Systematic Performance Evaluations
SEAA '10 Proceedings of the 2010 36th EUROMICRO Conference on Software Engineering and Advanced Applications
Measurement-based performance evaluations are widely used in practice to test system behavior under load, identify resource bottlenecks, and size system landscapes. Existing literature provides guidelines on how to conduct performance evaluations correctly, and many tools (e.g., for load generation, monitoring, or statistical analysis) provide basic building blocks for such evaluations. However, the wide range of knowledge required to conduct performance evaluations and to control the available tools restricts the group of users to a small set of performance experts. Additionally, the large effort required to set up systems for performance evaluations often limits their application. In this demo paper, we present a framework that encapsulates best practices and allows for a separation of concerns regarding the different aspects of a performance evaluation. The Performance Cockpit provides a single point of configuration for performance analysts and orchestrates plug-ins provided by the corresponding experts. The resulting flexibility and automation enable new approaches to quality assurance and lower the hurdles for conducting performance evaluations.
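The abstract's separation-of-concerns idea, a single point of configuration that orchestrates plug-ins contributed by different experts, can be sketched roughly as follows. This is a minimal illustration only; all class and method names are hypothetical and do not reflect the Performance Cockpit's actual API.

```python
from abc import ABC, abstractmethod

class EvaluationPlugin(ABC):
    """Hypothetical plug-in contract: each expert contributes one concern."""

    @abstractmethod
    def run(self, config: dict) -> dict:
        """Execute this plug-in's part of the evaluation and return metrics."""

class LoadGeneratorPlugin(EvaluationPlugin):
    def run(self, config: dict) -> dict:
        # Stand-in for driving load against the system under test.
        return {"requests_sent": config.get("users", 1) * 10}

class MonitoringPlugin(EvaluationPlugin):
    def run(self, config: dict) -> dict:
        # Stand-in for collecting resource metrics during the run.
        return {"cpu_util": 0.42}

class PerformanceCockpit:
    """Single point of configuration that orchestrates registered plug-ins."""

    def __init__(self, config: dict):
        self.config = config
        self.plugins: list[EvaluationPlugin] = []

    def register(self, plugin: EvaluationPlugin) -> None:
        self.plugins.append(plugin)

    def evaluate(self) -> dict:
        # The analyst supplies one configuration; each plug-in interprets
        # the parts relevant to its concern and contributes its results.
        results: dict = {}
        for plugin in self.plugins:
            results.update(plugin.run(self.config))
        return results

cockpit = PerformanceCockpit({"users": 5})
cockpit.register(LoadGeneratorPlugin())
cockpit.register(MonitoringPlugin())
print(cockpit.evaluate())  # e.g. {'requests_sent': 50, 'cpu_util': 0.42}
```

The design choice this mirrors is that the analyst only edits the central configuration, while domain experts ship self-contained plug-ins for load generation, monitoring, or analysis.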