The goal of performance regression testing is to check for performance regressions in a new version of a software system. It is an important phase of the software development process, yet it is very time consuming and is usually allotted little time. A typical test run outputs thousands of performance counters, which testers usually have to inspect manually to identify performance regressions. In this paper, we propose an approach that analyzes performance counters across test runs using a statistical process control technique called control charts. We evaluate our approach on historical data from a large software team as well as an open-source software project. The results show that our approach can accurately identify performance regressions in both software systems. Feedback from practitioners is very promising, owing to the simplicity of the approach and the ease of explaining its results.
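To make the idea concrete, the following is a minimal sketch of how a control chart could flag a counter regression: control limits are derived from a baseline run (here assumed to be mean plus or minus three standard deviations), and the new run is flagged if too many of its counter samples fall outside those limits. The abstract does not specify these details, so the function names, counter values, and the 10% violation threshold below are illustrative assumptions, not the paper's actual parameters.

import statistics

def control_limits(baseline_samples, sigmas=3.0):
    # Centre line and control limits from baseline counter samples
    # (assumed mean +/- sigmas * stdev construction).
    centre = statistics.mean(baseline_samples)
    spread = statistics.stdev(baseline_samples)
    return centre - sigmas * spread, centre + sigmas * spread

def violation_ratio(new_samples, lcl, ucl):
    # Fraction of the new run's samples outside the control limits;
    # a high ratio suggests a possible performance regression.
    outside = sum(1 for x in new_samples if x < lcl or x > ucl)
    return outside / len(new_samples)

# Hypothetical usage: CPU-utilization samples from a known-good baseline
# run and from the new version's test run (made-up numbers).
baseline = [42.0, 44.5, 43.1, 41.8, 44.0, 42.7, 43.5, 42.2]
new_run = [43.0, 49.8, 51.2, 50.5, 49.1, 52.3, 48.7, 50.9]

lcl, ucl = control_limits(baseline)
ratio = violation_ratio(new_run, lcl, ucl)
if ratio > 0.1:  # example threshold, not taken from the paper
    print(f"Possible regression: {ratio:.0%} of samples outside [{lcl:.1f}, {ucl:.1f}]")

In practice one such check would run per performance counter, so the approach scales to the thousands of counters a test run produces without manual inspection of each one.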