Middleware architectures play a crucial role in determining the overall quality of many distributed applications. Systematic evaluation methods for middleware architectures are therefore important for thoroughly assessing the impact of design decisions on quality goals. This paper presents MEMS, a scenario-based evaluation approach. MEMS provides a principled way of evaluating middleware architectures by leveraging generic qualitative and quantitative evaluation techniques such as prototyping, testing, rating, and analysis. It measures middleware architectures by rating multiple quality attributes, and its outputs help determine how well alternative middleware architectures meet an application's quality goals. MEMS also benefits middleware development by uncovering potential problems at an early stage, making design flaws cheaper and quicker to fix. The paper describes a case study evaluating the security architectures of grid middleware for managing secure conversations and access control. The results demonstrate the practical utility of MEMS for evaluating middleware architectures against multiple quality attributes.
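The core idea of rating candidate architectures on multiple quality attributes and comparing the results against an application's quality goals can be sketched as a simple weighted aggregation. This is a minimal illustrative sketch, not the paper's actual scoring scheme: the attribute names, weights, and ratings below are assumptions invented for the example.

```python
# Hypothetical MEMS-style aggregation: each candidate middleware architecture
# is rated per quality attribute (say, on a 1-5 scale, with ratings gathered
# via prototyping and testing of scenarios), and a weighted sum reflects the
# application's quality goals. All names and numbers here are illustrative.

# Application-specific importance of each quality attribute (weights sum to 1).
weights = {"performance": 0.40, "security": 0.35, "availability": 0.25}

# Per-attribute ratings for each candidate architecture.
ratings = {
    "middleware_A": {"performance": 4, "security": 3, "availability": 5},
    "middleware_B": {"performance": 3, "security": 5, "availability": 4},
}

def weighted_score(scores, weights):
    """Aggregate per-attribute ratings into a single suitability score."""
    return sum(weights[attr] * value for attr, value in scores.items())

# Rank candidates by how well they match the stated quality goals.
ranked = sorted(ratings, key=lambda m: weighted_score(ratings[m], weights),
                reverse=True)
print(ranked[0])  # prints "middleware_B" (3.95 vs 3.90 for middleware_A)
```

In practice the per-attribute ratings would come from the qualitative and quantitative techniques the paper names (prototyping, testing, rating, analysis), and the weighting would be negotiated with stakeholders rather than fixed up front.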