Regression modelling of software quality: empirical investigation
Journal of Electronic Materials
Estimating the Probability of Failure When Testing Reveals No Failures
IEEE Transactions on Software Engineering
Methodology for Validating Software Metrics
IEEE Transactions on Software Engineering
Faults on its sleeve: amplifying software reliability testing
ISSTA '93 Proceedings of the 1993 ACM SIGSOFT international symposium on Software testing and analysis
Comments on 'A Metrics Suite for Object Oriented Design'
IEEE Transactions on Software Engineering
A Validation of Object-Oriented Design Metrics as Quality Indicators
IEEE Transactions on Software Engineering
Extreme programming explained: embrace change
Building Knowledge through Families of Experiments
IEEE Transactions on Software Engineering
Measuring and Evaluating Maintenance Process Using Reliability, Risk, and Test Metrics
IEEE Transactions on Software Engineering
Estimating software fault-proneness for tuning testing activities
Proceedings of the 22nd international conference on Software engineering
Deriving models of software fault-proneness
SEKE '02 Proceedings of the 14th international conference on Software engineering and knowledge engineering
Test Driven Development: By Example
An empirical evaluation of fault-proneness models
Proceedings of the 24th International Conference on Software Engineering
A Metrics Suite for Object Oriented Design
IEEE Transactions on Software Engineering
Software Metrics Model For Quality Control
METRICS '97 Proceedings of the 4th International Symposium on Software Metrics
An Integrated Process and Product Model
METRICS '98 Proceedings of the 5th International Symposium on Software Metrics
An Empirical Study on Object-Oriented Metrics
METRICS '99 Proceedings of the 6th International Symposium on Software Metrics
Investigation of Logistic Regression as a Discriminant of Software Quality
METRICS '01 Proceedings of the 7th International Symposium on Software Metrics
Some issues in multi-phase software reliability modeling
CASCON '93 Proceedings of the 1993 conference of the Centre for Advanced Studies on Collaborative research: software engineering - Volume 1
Software test effort estimation
ACM SIGSOFT Software Engineering Notes
Dependability metrics
Classification of software artifacts based on structural information
KES'10 Proceedings of the 14th international conference on Knowledge-based and intelligent information and engineering systems: Part IV
Interactive churn metrics: socio-technical variants of code churn
ACM SIGSOFT Software Engineering Notes
Influence of confirmation biases of developers on software quality: an empirical study
Software Quality Control
Field reliability is measured too late to affordably guide corrective action to improve the quality of the software. Software developers can benefit from an early warning of their product's reliability while they can still affordably react. This early warning can be built from a collection of internal metrics. An internal metric, such as the number of lines of code, is a measure derived from the product itself [15]. An external metric is a measure of a product derived from assessment of the behavior of the system [15]; for example, the number of defects found in test is an external measure. The ISO/IEC standard [15] states that "[i]nternal metrics are of little value unless there is evidence that they are related to external quality."

Internal metrics can be collected in-process and more easily than external metrics. Additionally, internal metrics have been shown to be useful as early indicators of externally-visible product quality [1]. For these early indicators to be meaningful, they must be related, in a statistically significant and stable way, to the field quality/reliability of the product. The validation of such metrics requires a convincing demonstration that (1) the metric measures what it purports to measure and (2) the metric is associated with an important external metric, such as field reliability, maintainability, or fault-proneness [12].

Software metrics have been used as indicators of software quality [1, 19-21, 23] and fault-proneness [8-10, 24]. There is a growing body of empirical results supporting the theoretical validity of using higher-order early metrics, such as the object-oriented (OO) metrics [1] defined by the Chidamber-Kemerer (CK) [6] and MOOD [5] metric suites, as predictors of field quality. However, the general validity of these metrics, which are often unrelated to the actual operational profile of the product, is still open to criticism [7].
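The second validation criterion above — demonstrating an association between an internal metric and an external one — can be sketched as a rank-correlation check. The sketch below is illustrative only: the module data are invented, and the choice of lines of code as the internal metric, field defect counts as the external metric, and Spearman rank correlation as the association measure are assumptions for the example, not taken from the cited studies.

```python
# Sketch: associating an internal metric (lines of code per module) with an
# external metric (field defect counts) via Spearman rank correlation.
# All data here are invented for illustration.

def ranks(values):
    """Return 1-based average ranks for `values`, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

loc = [120, 450, 80, 900, 300, 60]   # internal metric: lines of code
defects = [1, 5, 0, 9, 3, 0]         # external metric: field defects
rho = spearman(loc, defects)         # strong positive rank correlation
```

A high rho on data like this would be only one ingredient of validation: the association must also prove statistically significant and stable across releases before the internal metric can serve as an early warning.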