Software Reliability Engineered Testing
The testing of software systems is subject to strongly conflicting forces. A system must function sufficiently reliably for its application, but it must also reach the market at the same time as its competitors (preferably before) and at a competitive cost. Some systems may be less market-driven than others, but balancing reliability, time of delivery, and cost is always important. One of the most effective ways to strike this balance is to engineer the test process through quantitative planning and tracking. Unfortunately, most software testing is not engineered, and the resulting product may not be as reliable as it should be, may be delivered too late, or may cost too much. Software-reliability-engineered testing (SRET) combines the use of quantitative reliability objectives and operational profiles (profiles of system use). The operational profile guides developers in testing more realistically, which makes it possible to track the reliability actually being achieved. This article describes SRET in the context of an actual AT&T project. SRET is an AT&T current best practice; qualification as a best practice requires use on eight to ten projects and large benefit/cost ratios. Practitioners have generally found SRET unique in offering a standard, proven means to engineer and manage testing in a way that increases their confidence in the reliability of the software-based systems they develop.
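The two ingredients named above — an operational profile that drives test selection, and a quantitative reliability objective that testing is tracked against — can be sketched in a few lines. This is a minimal illustration, not the article's actual procedure: the operation names, probabilities, and the failure-intensity objective below are hypothetical.

```python
import random

# Hypothetical operational profile: each operation paired with its estimated
# probability of occurrence in the field (probabilities sum to 1).
operational_profile = {
    "place_call": 0.55,
    "receive_call": 0.30,
    "forward_call": 0.10,
    "conference_call": 0.05,
}

def select_test_operations(profile, n, seed=None):
    """Draw n test operations at random, weighted by field-usage probability,
    so the test run exercises the system the way users actually would."""
    rng = random.Random(seed)
    ops = list(profile)
    weights = [profile[op] for op in ops]
    return rng.choices(ops, weights=weights, k=n)

def failure_intensity(failures, test_hours):
    """Observed failures per hour of profile-driven test execution."""
    return failures / test_hours

# Select 1,000 test cases; frequent field operations dominate the sample.
tests = select_test_operations(operational_profile, 1000, seed=42)

# Track achieved reliability against a (hypothetical) objective of
# 1 failure per 100 execution hours.
objective = 0.01
observed = failure_intensity(failures=3, test_hours=400)  # 0.0075 failures/hour
print(observed <= objective)  # objective met for this test interval
```

Because test cases are drawn in proportion to field use, the failure intensity measured during test estimates what users will actually experience, which is what makes tracking against a quantitative objective meaningful.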