Model-based performance evaluation methods for software architectures can help architects assess design alternatives and avoid the cost of late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the effort needed for modelling are heavily influenced by human factors, which are so far hardly understood empirically. Do component-based methods enable performance predictions of comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for further comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected from the resulting artefacts, questionnaires, and screen recordings, and were analysed using hypothesis testing, linear models, and analysis of variance. For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced overestimates. Comparing the component-based method PCM with SPE, we found that creating reusable models with PCM takes more (but not drastically more) time than with SPE, and that participants can create accurate models with both techniques.
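The analysis-of-variance step mentioned above can be sketched with a small, self-contained example: a one-way ANOVA F-statistic comparing per-participant modelling effort between two methods. The effort values and group labels below are invented for illustration; they are not the experiment's data.

```python
# Hypothetical sketch: one-way ANOVA F-statistic for comparing modelling
# effort (minutes) between two method groups, using only the stdlib.
from statistics import mean

def anova_f(groups):
    """One-way ANOVA F-statistic for a list of samples (lists of numbers)."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total number of observations
    grand = mean(x for g in groups for x in g)
    # between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # within-group sum of squares
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented effort values per participant, one list per method group.
method_a_effort = [210, 190, 250, 230]
method_b_effort = [260, 280, 240, 300]
f = anova_f([method_a_effort, method_b_effort])
print(f"F = {f:.2f}")  # prints "F = 7.50"
```

In practice the F-statistic would be compared against the F-distribution's critical value for the chosen significance level; a statistics package would also report the p-value directly.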
Finally, we found that reusing PCM models can save time, because the reuse effort can be explained by a model that is independent of a component's inner complexity. The tasks performed in our experiments reflect only a subset of the activities involved in applying model-based performance evaluation methods in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort of component-based performance modelling pays off when the component models incorporate and hide a sufficient amount of complexity.
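For readers unfamiliar with the kind of artefact these methods produce, the essence of model-based performance prediction can be illustrated with the simplest possible analytic model: an M/M/1 queue predicting a server's mean response time from its service demand and arrival rate. This is a generic textbook sketch, not the SPE, CP, or PCM modelling formalism; the service demand and load levels are invented example parameters.

```python
# Minimal illustration of model-based performance prediction:
# an M/M/1 queue's mean response time R = S / (1 - rho), with
# utilisation rho = arrival_rate * service_time.

def mm1_response_time(service_time_s: float, arrival_rate: float) -> float:
    """Predict mean response time (seconds) of a single M/M/1 server."""
    rho = arrival_rate * service_time_s
    if rho >= 1.0:
        raise ValueError("system unstable: utilisation >= 1")
    return service_time_s / (1.0 - rho)

# Predict response times for a 20 ms service demand at two load levels.
print(mm1_response_time(0.020, 10.0))  # light load (rho = 0.2): 0.025 s
print(mm1_response_time(0.020, 45.0))  # heavy load (rho = 0.9): 0.2 s
```

Such a formula lets an architect compare design alternatives (e.g. a faster component versus a second server) before any code exists, which is the cost-saving argument the abstract makes for evaluating architectures early.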