Utilizing verification and validation certificates to estimate software defect density
Proceedings of the 10th European software engineering conference held jointly with 13th ACM SIGSOFT international symposium on Foundations of software engineering
Early estimation of defect density using an in-process Haskell metrics model
A-MOST '05 Proceedings of the 1st international workshop on Advances in model-based testing
Early estimation of software quality using in-process testing metrics: a controlled case study
3-WoSQ Proceedings of the third workshop on Software quality
Towards a deeper understanding of test coverage
Journal of Software Maintenance and Evolution: Research and Practice
Potential of open source systems as project repositories for empirical studies working group results
Proceedings of the 2006 international conference on Empirical software engineering issues: critical assessment and future directions
Predicting software complexity by means of evolutionary testing
Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering
The demand for quality in software applications has grown, and awareness of software testing-related issues plays an important role in meeting that demand. Unfortunately, in industrial practice, information on the field quality of a software product tends to become available too late in the software lifecycle to affordably guide corrective actions. An important step toward remediating this problem lies in the ability to provide an early estimate of post-release field quality. This dissertation presents a suite of nine in-process metrics, the Software Testing and Reliability Early Warning (STREW) metric suite, that leverages the software testing effort to provide (1) an estimate of post-release field quality early in the software development phases, and (2) color-coded feedback to developers on the quality of their testing effort, identifying areas that could benefit from more testing. We built and validated our model via a three-phase case study approach that progressively involved 22 small-scale academic projects, 27 medium-sized open source projects, and five large-scale industrial projects. The ability of the STREW metric suite to estimate post-release field quality was evaluated using statistical regression models in the three different environments. The error in estimation and the sensitivity of the predictions indicate that the STREW metric suite can effectively be used to predict post-release software field quality. Further, the test quality feedback was found to be statistically significantly associated with post-release software quality, indicating the ability of the STREW metrics to provide meaningful feedback on the quality of the testing effort.
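The regression-based estimation described above can be illustrated with a minimal sketch. The metric names, data values, and three-metric subset below are hypothetical stand-ins, not the actual nine STREW metrics or the dissertation's data; the point is only the shape of the approach — fit a linear model mapping in-process test metrics to post-release defect density, then apply it to a new project early in development.

```python
import numpy as np

# Illustrative training data: each row is one completed project's
# in-process test metrics (hypothetical STREW-style ratios):
# [assertions per KLOC, test LOC / source LOC, statement coverage]
metrics = np.array([
    [12.0, 0.8, 0.85],
    [ 4.0, 0.3, 0.55],
    [ 9.0, 0.6, 0.75],
    [ 2.0, 0.2, 0.40],
    [15.0, 1.1, 0.90],
])
# Observed post-release defect density (defects per KLOC) for each project.
defect_density = np.array([0.9, 3.1, 1.5, 4.0, 0.5])

# Add an intercept column and fit ordinary least squares.
X = np.column_stack([np.ones(len(metrics)), metrics])
coef, *_ = np.linalg.lstsq(X, defect_density, rcond=None)

def predict(metric_vector):
    """Estimate post-release defect density for a new project."""
    return float(np.array([1.0, *metric_vector]) @ coef)

# Early-warning use: estimate field quality for an in-progress project.
estimate = predict([7.0, 0.5, 0.70])
```

A project with a stronger testing effort (more assertions, more test code, higher coverage) should receive a lower estimated defect density than a weakly tested one, which is the basis for the color-coded feedback the abstract describes.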