Developers of highly configurable, performance-intensive software systems often use a form of in-house, performance-oriented "regression testing" to ensure that their modifications have not adversely affected their software's performance across its large configuration space. Unfortunately, time and resource constraints often limit developers to testing a small number of configurations in-house and extrapolating unreliably from those results to the entire configuration space, which allows many performance bottlenecks and sources of QoS degradation to escape detection until systems are fielded. To improve performance assessment of evolving systems across large configuration spaces, we have developed a distributed continuous quality assurance (DCQA) process called main effects screening, which uses in-the-field resources to execute formally designed experiments that help reduce the configuration space, thereby allowing developers to perform more targeted in-house QA. We evaluated this process through feasibility studies on several large, widely used, performance-intensive software systems. Our results indicate that main effects screening can detect key sources of performance degradation in large-scale systems with significantly less effort than conventional techniques.
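The abstract does not include implementation detail, but the core idea of main effects screening — running a fractional factorial screening design and estimating each configuration option's main effect on a performance metric — can be illustrated with a minimal sketch. The sketch below is not the authors' tooling: the option names and the run_benchmark() stand-in are hypothetical, and a real DCQA process would farm these runs out to in-the-field machines rather than run them in one process.

```python
# Minimal sketch of main effects screening over four binary configuration
# options, using a 2^(4-1) fractional factorial design (8 runs instead of
# the full 16). Option names and run_benchmark() are hypothetical stand-ins.
import itertools
import random

OPTIONS = ["ReactorType", "ConnectionHandler", "DebugLevel", "ProfileLock"]

def run_benchmark(config):
    """Stand-in for one in-the-field benchmark run; returns latency in ms."""
    random.seed(hash(tuple(sorted(config.items()))))
    # Pretend ReactorType dominates performance; add measurement noise.
    return 100 + 12 * config["ReactorType"] + random.gauss(0, 2)

# Build the half fraction: the first three options vary freely over their
# low/high levels (-1/+1); the fourth is fixed by the generator D = ABC.
runs = []
for a, b, c in itertools.product([-1, 1], repeat=3):
    config = dict(zip(OPTIONS, (a, b, c, a * b * c)))
    runs.append((config, run_benchmark(config)))

# Main effect of an option = mean response at its high level minus mean
# response at its low level. A large |effect| flags an option whose
# settings warrant exhaustive, targeted in-house testing.
for opt in OPTIONS:
    high = [y for cfg, y in runs if cfg[opt] == +1]
    low = [y for cfg, y in runs if cfg[opt] == -1]
    effect = sum(high) / len(high) - sum(low) / len(low)
    print(f"{opt:20s} main effect: {effect:+6.2f} ms")
```

A natural follow-on step, consistent with the process the abstract describes, is to exercise the few options with large estimated effects exhaustively in-house while holding the remaining options at default settings, shrinking the space that must be tested after each modification.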