Software quality control and prediction model for maintenance
Annals of Software Engineering
A model is developed for validating and applying metrics for quality control, using the Space Shuttle flight software as an example. We validate metrics with respect to a quality factor in accordance with a previously developed metrics validation methodology. Boolean discriminant functions (BDFs) are developed for use in the quality control process. These functions make fewer mistakes in classifying low-quality software than linear vectors of metrics do, because BDFs incorporate additional information for discriminating quality: critical values. Critical values are threshold values of metrics used to accept or reject modules when the modules are inspected during the quality control process. A series of nonparametric statistical methods is used to: 1) identify a set of candidate metrics for further analysis; 2) identify the critical values of the metrics; and 3) find the optimal function of metrics and critical values. A marginal analysis should be performed when deciding how many metrics to use in a quality control process. Certain metrics are dominant in their effect on classifying quality, and additional metrics are not needed to classify quality accurately; this effect is called dominance. Related to the property of dominance is the property of concordance: the degree to which a set of metrics produces the same result in classifying software quality. A high value of concordance implies that additional metrics will not make a significant contribution to accurately classifying quality; hence, these metrics are redundant.
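The accept/reject rule described above can be sketched in code. The following is a minimal illustration, not the study's actual model: the metric names and critical values are hypothetical placeholders, and the BDF shown is a simple disjunction (reject a module if any metric exceeds its critical value). A small concordance helper is also included to illustrate the degree to which two single-metric classifiers agree.

```python
# Hypothetical critical values: threshold values of metrics used to
# accept or reject modules during quality control inspection.
# (Illustrative only; not the values from the Space Shuttle study.)
CRITICAL_VALUES = {
    "cyclomatic_complexity": 14,
    "statement_count": 80,
}

def bdf_reject(module_metrics):
    """Disjunctive Boolean discriminant function: reject a module
    if ANY metric exceeds its critical value."""
    return any(
        module_metrics[name] > threshold
        for name, threshold in CRITICAL_VALUES.items()
    )

def single_metric_reject(module_metrics, name):
    """Classification by one metric alone, against its critical value."""
    return module_metrics[name] > CRITICAL_VALUES[name]

def concordance(modules, metric_a, metric_b):
    """Fraction of modules on which two single-metric classifiers agree
    (an illustration of the concordance property)."""
    agree = sum(
        single_metric_reject(m, metric_a) == single_metric_reject(m, metric_b)
        for m in modules.values()
    )
    return agree / len(modules)

# Two hypothetical modules with measured metric values.
modules = {
    "m1": {"cyclomatic_complexity": 5, "statement_count": 40},
    "m2": {"cyclomatic_complexity": 21, "statement_count": 60},
}

decisions = {name: ("reject" if bdf_reject(m) else "accept")
             for name, m in modules.items()}
print(decisions)  # m1 is accepted; m2 is rejected on complexity
```

A high concordance between two metrics would suggest, as in the abstract, that keeping both adds little discriminating power; a marginal analysis would then drop the redundant one.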