In order to develop some basic information on software errors, an experiment in collecting data on the types and frequencies of such errors was conducted at Bell Laboratories. The paper reports the results of this experiment, whose objectives were to: (1) develop and use a set of terms for describing possible types of errors, their nature, and their frequency; (2) perform a pilot study to determine whether data of the type reported in this paper could be collected; (3) investigate the error density and its correspondence to predictions from previously reported data; and (4) develop data on how resources are expended in debugging. A program of approximately 4K machine instructions (final size) was chosen. For each error, programmers were asked to fill out, in addition to the regular Trouble Report/Correction Report (TR/CR) form, a special Supplementary TR/CR form for the purposes of this experiment. Sixty-three TR/CR and Supplementary forms were completed during the Test and Integration phase of the program. In general, the data collected were felt to be accurate enough for the purposes of the analyses presented. The 63 forms represented a little over 1-1/2% of the total number of machine instructions in the program, in good agreement with the 1% to 2% range noted in previous studies. It was discovered that a large percentage of the errors were found by hand processing (without the aid of a computer). This method was found to be much cheaper than techniques involving machine testing.
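The error-density figure follows directly from the two numbers reported. A minimal sketch of the arithmetic, assuming "approximately 4K" means 4,000 machine instructions (the abstract does not give the exact count):

```python
# Error density reported in the Bell Laboratories experiment:
# 63 TR/CR forms were filed against a program of ~4K machine
# instructions. The instruction count of 4000 is an assumption;
# the paper says only "approximately 4K".

tr_cr_forms = 63       # errors logged during Test and Integration
instructions = 4000    # assumed final program size

density_percent = 100 * tr_cr_forms / instructions
print(f"error density: about {density_percent:.1f}%")
# a little over 1-1/2%, within the 1% to 2% range noted in previous studies
```

With 4,096 instructions instead, the density would be about 1.5%; either reading is consistent with the abstract's "a little over 1-1/2%".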