Most software test data adequacy criteria proposed for assessing the progress of testing rely on coverage of the code, coverage of the specification, or the percentage of the input domain that has been exercised. None of these, however, is a good indicator of how thoroughly the software has been tested. We propose instead that adequacy be assessed as the percentage of the probability mass associated with the test suite. This requires collecting data over a period of time on how the software is actually used in practice, so that a usage profile can be established. With such a profile, a project can accurately determine how testing is progressing.
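The adequacy measure described above can be sketched in a few lines. The following is a minimal illustration, not an implementation from the paper: it assumes a usage profile has already been collected as a mapping from operations to their observed usage probabilities, and scores a test suite by the fraction of that probability mass its exercised operations cover. All names (`probability_mass_covered`, the example operations) are hypothetical.

```python
def probability_mass_covered(profile, test_suite):
    """Adequacy of a test suite under a usage profile.

    profile: dict mapping each operation to its usage probability
             (probabilities sum to roughly 1.0, from field data).
    test_suite: set of operations exercised by the tests.
    Returns the fraction of the usage probability mass covered.
    """
    return sum(p for op, p in profile.items() if op in test_suite)


# Hypothetical usage frequencies observed over a data-collection period.
profile = {"login": 0.50, "search": 0.30, "export": 0.15, "admin": 0.05}

# A suite exercising only the two most common operations already
# covers 80% of the probability mass, even though it covers only
# half of the operations.
suite = {"login", "search"}
print(probability_mass_covered(profile, suite))  # 0.8
```

Note the contrast with an input-domain criterion: the same suite would score only 50% by "operations covered", but 80% by probability mass, reflecting how the software is actually used.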