In this paper, we describe our experience with using problem reports from industry for quality assessment. The non-uniform terminology used in problem reports and the associated validity concerns have been the subject of earlier research but are far from settled. To distinguish between terms such as defect and error, we propose answering three questions about the scope of a study: what (the appearance of a problem or its cause), where (problems related to software, executable or not, or to the system), and when (problems recorded in all development life-cycle phases or only in some of them). We discuss challenges in defining research questions and metrics, collecting and analyzing data, generalizing the results, and reporting them. Ambiguity in the definition of problem-report fields, together with missing, inconsistent, or wrong data, threatens the value of the collected evidence. Some of these concerns could be settled by answering a few basic questions about the problem-reporting fields and by improving data-collection routines and tools.
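The what/where/when scoping described above can be made concrete as a small classification schema. The sketch below is a hypothetical illustration, not the paper's own tooling; all names (`What`, `Where`, `When`, `StudyScope`) are assumptions introduced here to show how a study could record its scope explicitly before counting problem reports.

```python
from dataclasses import dataclass
from enum import Enum

class What(Enum):
    APPEARANCE = "the appearance of problems"   # the observed failure
    CAUSE = "the cause of problems"             # the underlying fault

class Where(Enum):
    EXECUTABLE_SOFTWARE = "executable software"
    NON_EXECUTABLE_SOFTWARE = "non-executable software"  # e.g. documents, models
    SYSTEM = "the system as a whole"            # software plus its environment

class When(Enum):
    ALL_PHASES = "all development life-cycle phases"
    SELECTED_PHASES = "selected life-cycle phases"  # e.g. system test only

@dataclass(frozen=True)
class StudyScope:
    """Answers to the three scope questions for one study."""
    what: What
    where: Where
    when: When

    def describe(self) -> str:
        # A one-line summary that could accompany reported metrics.
        return (f"This study counts {self.what.value} in "
                f"{self.where.value}, recorded in {self.when.value}.")

# Example: a study of fault causes in executable code during system test.
scope = StudyScope(What.CAUSE, Where.EXECUTABLE_SOFTWARE, When.SELECTED_PHASES)
print(scope.describe())
```

Recording the scope as data rather than prose makes it harder for two studies with different answers to the three questions to be compared as if they measured the same thing.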