Ultra-Large-Scale (ULS) systems face continuously evolving field workloads in terms of activated/disabled feature sets, varying usage patterns and changing deployment configurations. These evolving workloads often have a large impact on the performance of a ULS system. Hence, continuous load testing is critical to ensuring the error-free operation of such systems. A common challenge facing performance analysts is to validate whether a load test closely resembles the current field workloads. Such validation may be performed by comparing execution logs from the load test and the field. However, the size and unstructured nature of execution logs make such a comparison infeasible without automated support. In this paper, we propose an automated approach that validates whether a load test resembles the field workload and, if not, determines how they differ, by comparing execution logs from the load test and the field. Performance analysts can then update their load test cases to eliminate such differences, creating more realistic load test cases. We perform three case studies on two large systems: one open-source system and one enterprise system. Our approach identifies differences between load tests and the field with a precision of 75%, compared to only 16% for the state-of-the-practice.
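To illustrate the general idea of comparing execution logs from a load test against the field, the following minimal sketch (not the authors' implementation) abstracts raw log lines into execution events, computes each event's relative frequency in the field and in the load test, and flags events whose frequencies differ by more than a threshold. The log format, the abstraction rule (masking numbers and hex identifiers), and the frequency-difference threshold are illustrative assumptions.

```python
# Sketch: flag workload differences between field and load-test logs.
# Assumptions: line-oriented logs; events abstracted by masking dynamic
# values; a simple frequency-difference threshold stands in for the
# paper's actual comparison technique.
import re
from collections import Counter

def abstract_event(line: str) -> str:
    """Abstract a raw log line into an execution event by masking
    dynamic content such as numbers and hex identifiers."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<ID>", line)
    return re.sub(r"\d+", "<NUM>", line).strip()

def event_distribution(lines):
    """Relative frequency of each abstracted event."""
    counts = Counter(abstract_event(l) for l in lines if l.strip())
    total = sum(counts.values()) or 1
    return {event: n / total for event, n in counts.items()}

def workload_differences(field_lines, test_lines, threshold=0.05):
    """Report events whose relative frequency differs by more than
    `threshold` between the field and the load test."""
    field = event_distribution(field_lines)
    test = event_distribution(test_lines)
    diffs = []
    for event in set(field) | set(test):
        f_freq = field.get(event, 0.0)
        t_freq = test.get(event, 0.0)
        if abs(f_freq - t_freq) > threshold:
            diffs.append((event, f_freq, t_freq))
    return sorted(diffs, key=lambda d: abs(d[1] - d[2]), reverse=True)

if __name__ == "__main__":
    # Toy logs standing in for field and load-test execution logs.
    field = ["login user 42", "browse item 7", "browse item 9", "checkout 7"]
    test = ["login user 1", "login user 2", "browse item 3"]
    for event, f_freq, t_freq in workload_differences(field, test):
        print(f"{event}: field={f_freq:.2f} test={t_freq:.2f}")
```

In this toy example, events such as "checkout <NUM>" appear only in the field log, so they would be reported as differences that a performance analyst could use to make the load test more representative of the field workload.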