A major problem in empirical software engineering is to determine or ensure comparability across multiple sources of empirical data. This paper summarizes experiences in developing and applying a software engineering technology testbed (SETT). The testbed was designed to ensure comparability of the empirical data used to evaluate alternative software engineering technologies, and to accelerate technology maturation and transition into project use. The requirements for such a testbed include not only specifications and code, but also the package of instrumentation, scenario drivers, seeded defects, experimentation guidelines, and comparative effort and defect data needed to support technology evaluation experiments. The requirements and architecture for a particular testbed, built to help NASA evaluate its investments in software dependability research and technology, have been developed and applied to evaluate a wide range of technologies drawn from the fields of architecture, testing, state-model checking, and operational envelopes. This paper presents, for the first time, the requirements and architecture of this testbed, and analyzes the results of the technology evaluations from the standpoint of how researchers benefited from using the SETT; in their original findings, the researchers reported only how their own technologies performed. The testbed evaluation showed (1) that certain technologies were complementary and cost-effective to apply; (2) that the testbed was cost-effective for researchers to use within a well-specified domain of applicability; (3) that collaboration between researchers and practitioners in testbed use resulted in comparable empirical data and in actions that accelerated technology maturity and transition into project use, as shown in the AcmeStudio evaluation; and (4) that the testbed's requirements and architecture were suitable for evaluating technologies and accelerating their maturation and transition into project use.