RBOSTP: risk-based optimization of software testing process part 1
ICCOMP'05 Proceedings of the 9th WSEAS International Conference on Computers
RBOSTP: risk-based optimization of software testing process part 2
ICCOMP'05 Proceedings of the 9th WSEAS International Conference on Computers
The primary purpose of the Software Testing Process and Evaluation (STP&E) is to reduce risk. All testing provides insight and helps identify "unknown unknowns". This paper describes software testing process (STP) techniques with assured confidence. Uncertainty is an important issue in the computer industry today, and testing remains the main technique for quality assurance; there is a need to ensure that software is reasonably free of severe faults after testing. When faced with financial or schedule constraints, testing is usually cut horizontally, attempting to cover as many different test requirements as possible at the expense of depth. We have reached a point where we must test smarter and apply a Statistical Risk-Based Test with Assured Confidence (SRBTAC) management procedure, picking the right assessment tools to make vertical cuts in our test strategies. Any single test has a low probability of detecting a problem, but the sheer number of possible inputs yields an unacceptably high probability of residual defects. To define the smallest number of tests that achieves "enough" coverage, we apply reduction hypotheses such as uniformity, regularity, induction, deduction, and analogy, and determine how many input combinations "enough" requires. If the inputs are analyzed, there exists a set of input combinations that determines the outputs, so only those combinations of inputs need to be tested. Approaches to software testing based on methods from the field of design of experiments have been advocated as a means of providing high coverage with a minimal number of test cases at relatively low cost. These techniques can be embedded in an existing STP with minimal changes and extra effort.
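The combinatorial design-of-experiments idea can be illustrated with a greedy pairwise (all-pairs) test-suite generator: instead of testing the full Cartesian product of parameter values, it selects a much smaller set of test cases in which every pair of values from any two parameters still appears at least once. This is a minimal sketch of the general technique, not the specific SRBTAC procedure or the AETG algorithm; the function name and parameters are illustrative.

```python
from itertools import combinations, product

def all_pairs(parameters):
    """Greedily build a pairwise-covering test suite.

    parameters: dict mapping parameter name -> list of values.
    Returns a list of test cases (dicts) such that every pair of values
    drawn from any two distinct parameters occurs in at least one test.
    """
    names = list(parameters)
    # Enumerate every value pair that must be covered.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(parameters[a], parameters[b]):
            uncovered.add(((i, va), (j, vb)))

    tests = []
    while uncovered:
        # Pick the full combination that covers the most uncovered pairs
        # (exhaustive scan; fine for small parameter spaces).
        best, best_covered = None, set()
        for combo in product(*(parameters[n] for n in names)):
            covered = {((i, combo[i]), (j, combo[j]))
                       for i, j in combinations(range(len(names)), 2)
                       if ((i, combo[i]), (j, combo[j])) in uncovered}
            if len(covered) > len(best_covered):
                best, best_covered = combo, covered
        tests.append(dict(zip(names, best)))
        uncovered -= best_covered
    return tests
```

For three parameters with two values each, the full product is 8 test cases, while a pairwise suite needs only 4 to 5; the savings grow rapidly as parameters and values are added, which is the "minimal number of tests with high coverage" trade-off the abstract refers to.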