Using software performance curves for dependable and cost-efficient service hosting
Proceedings of the 2nd International Workshop on the Quality of Service-Oriented Software Systems
Performance cockpit: systematic measurements and analyses
Proceedings of the 2nd ACM/SPEC International Conference on Performance Engineering
Integration of event-based communication in the Palladio software quality prediction framework
Proceedings of the Joint ACM SIGSOFT Conference on Quality of Software Architectures (QoSA) and ACM SIGSOFT Symposium on Architecting Critical Systems (ISARCS)
Ginpex: deriving performance-relevant infrastructure properties through goal-oriented experiments
Proceedings of the Joint ACM SIGSOFT Conference on Quality of Software Architectures (QoSA) and ACM SIGSOFT Symposium on Architecting Critical Systems (ISARCS)
Efficient experiment selection in automated software performance evaluations
EPEW '11: Proceedings of the 8th European Conference on Computer Performance Engineering
Systematic adoption of genetic programming for deriving software performance curves
ICPE '12: Proceedings of the 3rd ACM/SPEC International Conference on Performance Engineering
Compositional performance abstractions of software connectors
ICPE '12: Proceedings of the 3rd ACM/SPEC International Conference on Performance Engineering
Integrating software performance curves with the Palladio Component Model
ICPE '12: Proceedings of the 3rd ACM/SPEC International Conference on Performance Engineering
A generic methodology to derive domain-specific performance feedback for developers
Proceedings of the 34th International Conference on Software Engineering
Automated inference of goal-oriented performance prediction functions
Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering
Systematic guidance in solving performance and scalability problems
Proceedings of the 18th International Doctoral Symposium on Components and Architecture
Systematic performance evaluation based on tailored benchmark applications
Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering
An experiment specification language for goal-driven, automated performance evaluations
Proceedings of the 28th Annual ACM Symposium on Applied Computing
Electronic Notes in Theoretical Computer Science (ENTCS)
jBM: a CPU benchmarking tool for cloud environments
Proceedings of the 6th International ICST Conference on Simulation Tools and Techniques
Performance-Aware design of web application front-ends
ICWE '13: Proceedings of the 13th International Conference on Web Engineering
Evaluating the performance (timing behavior, throughput, and resource utilization) of a software system becomes increasingly challenging, as today's enterprise applications are built on a large base of existing software (e.g., middleware, legacy applications, and third-party services). Because the performance of a system is affected by multiple factors on each layer of the system, performance analysts require detailed knowledge about the system under test and have to deal with a large number of tools for benchmarking, monitoring, and analysis. In practice, performance analysts try to handle this complexity by focusing on certain aspects, tools, or technologies. However, such isolated solutions are inefficient due to limited reuse and knowledge sharing. The Performance Cockpit presented in this paper is a framework that encapsulates knowledge about performance engineering, the system under test, and analyses in a single application with a flexible, plug-in-based architecture. We demonstrate the value of the framework by means of two different case studies.