SoftArch/MTE: Generating Distributed System Test-Beds from High-Level Software Architecture Descriptions

  • Authors:
  • John Grundy; Yuhong Cai; Anna Liu

  • Affiliations:
  • John Grundy: Department of Electrical and Computer Engineering and Department of Computer Science, University of Auckland, Private Bag 92019, Auckland, New Zealand. john-g@cs.auckland.ac.nz
  • Yuhong Cai: Department of Computer Science, University of Auckland, Private Bag 92019, Auckland, New Zealand
  • Anna Liu: Software Architectures and Component Technologies, CSIRO Mathematical and Information Sciences, Locked Bag 17, North Ryde, NSW 1670, Sydney, Australia. Anna.Liu@cmis.csiro.au

  • Venue:
  • Automated Software Engineering
  • Year:
  • 2005

Abstract

Most distributed system specifications include performance benchmark requirements, for example the number of transactions of a particular kind per second that the system must support. However, determining the likely eventual performance of a complex distributed system architecture during development is very challenging. We describe SoftArch/MTE, a software tool that allows software architects to sketch an outline of their proposed system architecture at a high level of abstraction. These descriptions include client requests, servers, server objects and object services, database servers and tables, and particular choices of middleware and database technologies. A fully working implementation of the system is then automatically generated from this high-level architectural description. The implementation is deployed on multiple client and server machines, and performance tests are automatically run against the generated code. Performance test results are recorded, sent back to the SoftArch/MTE environment, and displayed to the architect as graphs or as annotations on the original high-level architectural diagrams. Architects may change performance parameters and architecture characteristics, comparing the results of multiple test runs to determine the most suitable abstractions to refine into detailed designs for actual system implementation. Further tests may be run on refined architecture descriptions at any stage of system development. We demonstrate the utility of our approach and prototype tool, and the accuracy of our generated performance test-beds, for validating architectural choices during early system development.
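To make the described workflow concrete, the following is a minimal, hypothetical Python sketch of the underlying idea: a high-level architecture description is turned into a running client/server test-bed whose throughput (transactions per second) is measured automatically. Every name here (ArchitectureDescription, OrderService, run_server, run_client, the port number) is an illustrative assumption, not SoftArch/MTE's actual notation or generated code; the real tool generates middleware-specific implementations (for the architect's chosen middleware and database technologies) from its own architecture diagrams.

```python
# Hypothetical sketch: an "architecture description" driving a generated
# performance test-bed. Not SoftArch/MTE's real notation or output.
import socket
import threading
import time
from dataclasses import dataclass


@dataclass
class ArchitectureDescription:
    """High-level description: one client issuing requests to one service."""
    service_name: str
    num_requests: int          # requests the generated client will issue
    host: str = "127.0.0.1"
    port: int = 9090           # assumed free port for the sketch


def run_server(desc: ArchitectureDescription, ready: threading.Event) -> None:
    """Stand-in for the generated server: accepts one client, echoes requests."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((desc.host, desc.port))
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn:
            for _ in range(desc.num_requests):
                data = conn.recv(1024)
                conn.sendall(data)  # trivial "service": echo the request


def run_client(desc: ArchitectureDescription) -> float:
    """Stand-in for the generated client: times num_requests round trips."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((desc.host, desc.port))
        start = time.perf_counter()
        for i in range(desc.num_requests):
            cli.sendall(f"req-{i}".encode())
            cli.recv(1024)  # wait for the response before the next request
        elapsed = time.perf_counter() - start
    return desc.num_requests / elapsed  # transactions per second


if __name__ == "__main__":
    desc = ArchitectureDescription(service_name="OrderService", num_requests=500)
    ready = threading.Event()
    threading.Thread(target=run_server, args=(desc, ready), daemon=True).start()
    ready.wait()  # ensure the server is listening before the client connects
    tps = run_client(desc)
    print(f"{desc.service_name}: {tps:.0f} requests/second")
```

In the paper's approach, the analogous measurements are taken on real deployments across multiple client and server machines and fed back into the architecture diagrams, so the architect can compare alternative middleware and database choices before committing to a detailed design.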