Developing high-quality software within the allotted time and budget is a key element of a productive and successful software project. Software quality classification models that provide a risk-based quality estimation, labeling modules as fault-prone (fp) or not fault-prone (nfp), have proven useful as software quality assurance techniques. However, their usefulness depends largely on the availability of resources for deploying quality improvements to the modules predicted as fp. Since every project has its own needs and specifications, a classification modeling approach based on resource availability is warranted.

We propose and demonstrate the use of a resource-based measure, the "Modified Expected Cost of Misclassification" (MECM), for selecting and evaluating classification models. It extends the "Expected Cost of Misclassification" (ECM) measure, which we have previously applied for model evaluation. The proposed measure facilitates building resource-oriented classification models and overcomes a limitation of ECM, which assumes that enough resources are available to enhance all modules predicted as fp. The key aspect of MECM is that it penalizes a model, in terms of misclassification costs, if the model predicts more fp modules than can be enhanced with the available resources. Based on the resources available for improving the quality of software modules, a practitioner can use the proposed methodology to select the model that best suits the project's goals, making the best practical use of the available resources. The application, analysis, and benefits of MECM are demonstrated with models built using logistic regression. We conclude that the use of MECM is a promising approach for practical software quality improvement.
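The abstract does not reproduce the MECM formula, so the following Python sketch only illustrates the general idea under assumed definitions: ECM averages the costs of Type I errors (nfp predicted fp) and Type II errors (fp predicted nfp) over all modules, and MECM adds a penalty when the number of modules flagged fp exceeds the enhancement budget. The cost values (`c_fp`, `c_fn`) and the linear overflow penalty (`c_over`) are illustrative assumptions, not the paper's actual formulation.

```python
def ecm(fp, fn, n, c_fp=1.0, c_fn=10.0):
    """Expected Cost of Misclassification (illustrative form).

    fp: nfp modules wrongly predicted fault-prone (Type I errors)
    fn: fp modules wrongly predicted not fault-prone (Type II errors)
    n:  total number of modules
    c_fp, c_fn: assumed unit costs for each error type
    """
    return (c_fp * fp + c_fn * fn) / n

def mecm(fp, fn, n, predicted_fp, budget, c_fp=1.0, c_fn=10.0, c_over=10.0):
    """MECM sketch: ECM plus a penalty for flagging more modules
    than the available resources can enhance.

    predicted_fp: total modules the model flags as fault-prone
    budget: number of modules that can actually be enhanced
    c_over: assumed penalty per flagged module beyond the budget
    """
    overflow = max(0, predicted_fp - budget)
    return ecm(fp, fn, n, c_fp, c_fn) + c_over * overflow / n

# Two models with identical error counts: the one that flags more
# modules than the budget allows is penalized, the other is not.
print(mecm(fp=5, fn=3, n=100, predicted_fp=40, budget=20))  # over budget
print(mecm(fp=5, fn=3, n=100, predicted_fp=18, budget=20))  # within budget
```

Under this sketch, a practitioner would compute MECM for each candidate model at the project's actual budget and pick the model with the lowest value, which is the selection workflow the abstract describes.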