Software testing pursues two main goals: 1) achieving adequate quality (debug testing), where the objective is to probe the software for defects so that they can be removed; and 2) assessing existing quality (operational testing), where the objective is to gain confidence that the software is reliable. The names are arbitrary, and most testing techniques address both goals to some degree. However, debug methods tend to ignore random selection of test data from an operational profile, while for operational methods this selection is all-important. Debug methods are believed, without rigorous proof, to be good at uncovering defects so that they can be repaired, but having done so they provide no technically defensible assessment of the resulting reliability. Operational methods, on the other hand, provide accurate assessment but may be less effective at achieving reliability. This paper examines the relationship between the two testing goals using a probabilistic analysis. We define simple models of programs and their testing, and attempt to answer theoretically the question of how best to attain program reliability: is it better to probe for defects, as in debug testing, or to assess reliability directly, as in operational testing, uncovering defects incidentally, so to speak? There is, of course, no simple answer. Testing methods are compared in a model where program failures are detected and the software is changed to eliminate them; the "better" method delivers higher reliability after all test failures have been eliminated. This comparison extends previous work, in which the measure was the probability of detecting a failure. Revealing special cases are exhibited in which each kind of testing is superior. A preliminary analysis of the distribution of delivered reliability indicates that even simple models have unusual statistical properties, suggesting caution in interpreting theoretical comparisons.
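The comparison described above can be illustrated with a small simulation. The sketch below is a hypothetical toy model, not the paper's actual formal model: the subdomain weights and per-input failure rates are invented for illustration. It contrasts a debug-style strategy (tests spread evenly over subdomains) with an operational strategy (tests drawn from the profile), scoring each by the delivered unreliability left after every detected fault is repaired.

```python
import random

# Toy program model (illustrative values, not from the paper): the input
# domain is split into subdomains, each with an operational-profile weight
# and the failure rate of a single fault hiding in that subdomain.
SUBDOMAINS = [
    # (profile weight, per-input failure rate)
    (0.70, 0.001),   # common inputs, rarely-failing fault
    (0.25, 0.010),
    (0.05, 0.200),   # rare inputs, frequently-failing fault
]

def delivered_unreliability(found):
    """Profile-weighted failure probability after repairing the faults
    in the subdomains listed in `found`."""
    return sum(w * rate for i, (w, rate) in enumerate(SUBDOMAINS)
               if i not in found)

def debug_test(n, rng):
    """Debug (partition-style) testing: spread n tests evenly."""
    found = set()
    per_subdomain = n // len(SUBDOMAINS)
    for i, (_, rate) in enumerate(SUBDOMAINS):
        if any(rng.random() < rate for _ in range(per_subdomain)):
            found.add(i)
    return delivered_unreliability(found)

def operational_test(n, rng):
    """Operational testing: draw n tests from the profile itself."""
    found = set()
    weights = [w for w, _ in SUBDOMAINS]
    for _ in range(n):
        i = rng.choices(range(len(SUBDOMAINS)), weights=weights)[0]
        if rng.random() < SUBDOMAINS[i][1]:
            found.add(i)
    return delivered_unreliability(found)

rng = random.Random(1)
trials = 2000
dbg = sum(debug_test(300, rng) for _ in range(trials)) / trials
op = sum(operational_test(300, rng) for _ in range(trials)) / trials
print(f"mean delivered unreliability, debug testing:       {dbg:.5f}")
print(f"mean delivered unreliability, operational testing: {op:.5f}")
```

With these particular numbers, debug testing tends to win because it lavishes tests on the rare, failure-prone subdomain that the profile seldom samples; shifting the weights or failure rates can reverse the outcome, echoing the paper's point that each kind of testing is superior in some special cases.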