Using In-Process Testing Metrics to Estimate Post-Release Field Quality
ISSRE '07: Proceedings of the 18th IEEE International Symposium on Software Reliability
WoSQ '07: Proceedings of the 5th International Workshop on Software Quality
In industrial practice, information on the post-release field quality of a product tends to become available too late in the software development process to affordably guide corrective actions. An important step toward remedying this late-information problem is the ability to estimate software post-release field quality early. This paper presents a suite of in-process metrics that leverages the software testing effort to provide (1) an estimate of potential software field quality in early software development phases, and (2) the identification of low-quality software programs. A controlled case study conducted at North Carolina State University provides initial indication that our approach is effective for making an early assessment of post-release field quality.
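The idea of leveraging in-process testing metrics as early predictors of field quality can be sketched with a toy regression model. Everything below (the metric choice, the training data, and the flagging threshold) is a hypothetical illustration, not the paper's actual metric suite or its fitted model:

```python
# Hypothetical sketch: estimate post-release defect density from a single
# in-process testing metric using ordinary least squares, then flag programs
# whose estimate exceeds a quality threshold. All numbers are illustrative.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Training data from earlier releases (hypothetical):
# x = test LOC per source KLOC, y = field defects per KLOC.
history_x = [50.0, 120.0, 200.0, 300.0]
history_y = [4.0, 3.1, 1.9, 1.0]

a, b = fit_linear(history_x, history_y)

# Early estimate for a new program still in the testing phase.
new_x = 150.0
estimate = a + b * new_x
print(f"estimated field defects/KLOC: {estimate:.2f}")

# Identify low-quality candidates for corrective action before release.
THRESHOLD = 2.5  # hypothetical acceptance limit
flagged = estimate > THRESHOLD
print("flag for corrective action:", flagged)
```

More test effort per KLOC correlates here with fewer field defects, so the fitted slope is negative; a real metric suite would combine several such ratios in a multivariate model and validate it against historical releases.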