Quantitatively-based risk management can reduce the risks associated with field defects for both software producers and software consumers. In this paper, we report experiences and results from initiating risk-management activities at a large systems-development organization. The initiated activities aim to improve product testing (system/integration testing), to improve maintenance resource allocation, and to plan future process improvements. The experiences we report address practical issues not commonly addressed in research studies: how to select an appropriate modeling method for product-test prioritization and process-improvement planning, how to evaluate the accuracy of predictions across multiple releases over time, and how to conduct analysis with incomplete information. In addition, we report initial empirical results for two systems with 13 and 15 releases, respectively. We present prioritizations of configurations to guide product testing, field-defect predictions within the first year of deployment to aid maintenance resource allocation, and important predictors across both systems to guide process-improvement planning. Our results and experiences are steps toward quantitatively-based risk management.
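One practical issue the abstract raises is evaluating prediction accuracy across multiple releases over time. A common way to do this for field-defect predictions is to compute a relative-error measure per release and average it over releases. The sketch below illustrates that idea only; the error metric is a generic choice and the release data are invented, not taken from the paper's two systems.

```python
# Hedged illustration (not the paper's actual method or data): averaging
# the absolute relative error of field-defect predictions over releases.

def relative_error(predicted, actual):
    """Absolute relative error for one release (assumes actual > 0)."""
    return abs(predicted - actual) / actual

def average_relative_error(release_results):
    """Mean relative error across (predicted, actual) pairs, one per release."""
    errors = [relative_error(p, a) for p, a in release_results]
    return sum(errors) / len(errors)

# Invented example: predicted vs. actual first-year field defects for
# four consecutive releases of one hypothetical system.
releases = [(120, 100), (95, 110), (80, 80), (60, 75)]
print(round(average_relative_error(releases), 3))  # prints 0.134
```

Averaging per-release errors rather than pooling all defects keeps each release equally weighted, so a single large release cannot dominate the accuracy assessment.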