Benchmarking is an important performance evaluation technique that provides performance data representative of real systems. Such data can be used to verify the results of performance modeling and simulation, or to detect performance changes. Automated benchmarking is an increasingly popular approach to tracking performance changes during software development, giving developers timely feedback on their work. In contrast with the advances in modeling and simulation tools, tools for automated benchmarking are usually implemented ad hoc for each project, which wastes resources and limits functionality.

We present BEEN, a generic tool for automated benchmarking in a heterogeneous distributed environment. BEEN automates all steps of a benchmark experiment, from software building and deployment through measurement and load monitoring to the evaluation of results. Notable features include the separation of measurement from evaluation and the ability to adaptively scale the benchmark experiment based on the evaluation. BEEN has been designed to facilitate automated detection of performance changes during software development (regression benchmarking).