Software errors and complexity: an empirical investigation
Communications of the ACM
Collecting and categorizing software error data in an industrial environment
Journal of Systems and Software - Special issue on the fifth Minnowbrook workshop on software performance evaluation
Identifying Error-Prone Software: An Empirical Study
IEEE Transactions on Software Engineering
Concurrent Fault Detection in Microprogrammed Control Units
IEEE Transactions on Computers
Advances in software inspections
IEEE Transactions on Software Engineering
In-process inspections of workproducts at AT&T
AT&T Technical Journal
Processor Control Flow Monitoring Using Signatured Instruction Streams
IEEE Transactions on Computers
Evaluating Software Engineering Technologies
IEEE Transactions on Software Engineering
A roving monitoring processor for detection of control flow errors in multiple processor systems
Microprocessing and Microprogramming - Fault tolerant computing
A Two-Person Inspection Method to Improve Programming Productivity
IEEE Transactions on Software Engineering
A Watchdog processor for concurrent error detection in multiple processor systems
Microprocessors & Microsystems
An experimental study of fault detection in user requirements documents
ACM Transactions on Software Engineering and Methodology (TOSEM)
The Detection of Fault-Prone Programs
IEEE Transactions on Software Engineering
Experience with Fagan's inspection method
Software—Practice & Experience
Orthogonal Defect Classification-A Concept for In-Process Measurements
IEEE Transactions on Software Engineering - Special issue on software measurement principles, techniques, and environments
An analysis of defect densities found during software inspections
Journal of Systems and Software
An improved inspection technique
Communications of the ACM
Key Lessons in Achieving Widespread Inspection Use
IEEE Software
Evaluating Testing Methods by Delivered Reliability
IEEE Transactions on Software Engineering
Inspections as an up-front quality technique
Handbook of software quality assurance (3rd ed.)
Metrics for object-oriented software projects
Journal of Systems and Software
Programmers use slices when debugging
Communications of the ACM
Software Inspections: An Effective Verification Process
IEEE Software
Lessons from Three Years of Inspection Data
IEEE Software
Distributed, Collaborative Software Inspection
IEEE Software
Learning from Our Mistakes with Defect Causal Analysis
IEEE Software
Concurrent Error Detection Using Watchdog Processors-A Survey
IEEE Transactions on Computers
An Optimal Graph-Construction Approach to Placing Program Signatures for Signature Monitoring
IEEE Transactions on Computers
Comparing Detection Methods for Software Requirements Inspections: A Replicated Experiment
IEEE Transactions on Software Engineering
Xception: A Technique for the Experimental Evaluation of Dependability in Modern Computers
IEEE Transactions on Software Engineering
Software Faults in Evolving a Large, Real-Time System: a Case Study
ESEC '93 Proceedings of the 4th European Software Engineering Conference on Software Engineering
Concurrent Error Detection Using Watchdog Processors
Proceedings of the 5th International GI/ITG/GMA Conference on Fault-Tolerant Computing Systems, Tests, Diagnosis, Fault Treatment
Concurrent Error Detection Using Signature Monitors
Fehlertolerierende Rechensysteme / Fault-Tolerant Computing Systems, Automatisierungssysteme, Methoden, Anwendungen / Automation Systems, Methods, Applications; 4. Internationale GI/ITG/GMA-Fachtagung
Analysis of error processes in computer software
Proceedings of the international conference on Reliable software
An experiment to assess cost-benefits of inspection meetings and their alternatives: a pilot study
METRICS '96 Proceedings of the 3rd International Symposium on Software Metrics: From Measurement to Empirical Results
Toward A Quantifiable Definition of Software Faults
ISSRE '02 Proceedings of the 13th International Symposium on Software Reliability Engineering
Program Comprehension Techniques Improve Software Inspections: A Case Study
IWPC '00 Proceedings of the 8th International Workshop on Program Comprehension
A Framework for Assessing Dependability in Distributed Systems with Lightweight Fault Injectors
IPDS '00 Proceedings of the 4th International Computer Performance and Dependability Symposium
Software system defect content prediction from development process and product characteristics
Experiences with defect prevention
IBM Systems Journal
Specification mutation for test generation and analysis
Design and code inspections to reduce errors in program development
IBM Systems Journal
Detection of control flow errors using signature and checking instructions
ITC'88 Proceedings of the 1988 international conference on Test: new frontiers in testing
Continuous signature monitoring: efficient concurrent-detection of processor control errors
ITC'88 Proceedings of the 1988 international conference on Test: new frontiers in testing
Continuous signature monitoring: low-cost concurrent detection of processor control errors
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
Predicting software development errors using software complexity metrics
IEEE Journal on Selected Areas in Communications
Concurrent software fault detection
IEEE Transactions on Software Engineering
An analysis of errors and their causes in system programs
IEEE Transactions on Software Engineering
An important aspect of developing models that relate the number and type of faults in a software system to a set of structural measurements is defining what constitutes a fault. By definition, a fault is a structural imperfection in a software system that may lead to the system's eventually failing. A measurable and precise definition of a fault makes it possible to identify and count faults accurately, which in turn allows the formulation of models relating fault counts and types to other measurable attributes of a software system. Unfortunately, the most widely used definitions are not measurable: there is no guarantee that two individuals looking at the same set of failure reports and the same set of fault definitions will count the same number of underlying faults. The incomplete and ambiguous nature of current fault definitions adds a noise component to the inputs used in modeling fault content. If this noise component is sufficiently large, any attempt to develop a fault model will produce invalid results. In this paper, we base the recognition and enumeration of software faults on the grammar of the language of the software system. By tokenizing the differences between the version of the system that exhibits a particular failure behavior and the version in which changes were made to eliminate that behavior, we can unambiguously count the number of faults associated with that failure. With modern configuration management tools, the identification and counting of software faults can be automated.
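The grammar-based fault counting described above can be sketched in a few lines. This is only an illustration, not the paper's implementation: it uses Python's own `tokenize` module as the stand-in grammar and `difflib.SequenceMatcher` to find token-level differences between the failing and repaired versions, counting each contiguous differing span as one fault. The function names (`token_stream`, `count_faults`) and the single-span-per-fault convention are assumptions made for this sketch.

```python
import difflib
import io
import tokenize

def token_stream(source: str) -> list[str]:
    """Tokenize source using the language grammar (Python's, in this sketch),
    dropping layout-only tokens that carry no grammatical content."""
    skip = (tokenize.NL, tokenize.NEWLINE, tokenize.INDENT,
            tokenize.DEDENT, tokenize.ENDMARKER, tokenize.COMMENT)
    toks = tokenize.generate_tokens(io.StringIO(source).readline)
    return [t.string for t in toks if t.type not in skip]

def count_faults(failing: str, fixed: str) -> int:
    """Count the contiguous token-level differences between the version
    exhibiting the failure and the version in which it was repaired.
    Each differing span (replace, insert, or delete) counts as one fault;
    this one-fault-per-span convention is an assumption of the sketch."""
    a, b = token_stream(failing), token_stream(fixed)
    matcher = difflib.SequenceMatcher(a=a, b=b)
    return sum(1 for op, *_ in matcher.get_opcodes() if op != "equal")

# A repair that appends tokens (`* r`) shows up as a single differing span:
failing = "def area(r):\n    return 3.14 * r\n"
fixed   = "def area(r):\n    return 3.14 * r * r\n"
print(count_faults(failing, fixed))  # prints 1
```

Because the count is derived mechanically from the token streams of the two versions, any two people (or tools) applying it to the same pair of revisions get the same number, which is the measurability property the abstract argues for.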