Benchmarking self-adaptive software systems calls for a new model that takes into account a distinctive characteristic of such systems: alterations over time (i.e., self-achieved modifications or adjustments triggered by changes in the external or internal contexts of the system). Changes are thus a fundamental component of a resilience benchmark, raising an intrinsic research problem: how to identify and select the most realistic and relevant (sequences of) changes to be included in the benchmarking procedure. The difficulty is that defining a representative changeload would require access to a large amount of field data, which is not available for most systems. In this paper we propose an approach based on risk analysis to tackle this key issue, discussing its effectiveness and usability through a simple case study. The procedure, which combines field data with expert knowledge and experimental data, allows moving from the generic goals of systems in the benchmarking domain to the most relevant change scenarios (selected based on probability and impact) that may prevent those systems from achieving their goals.
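The core selection step described above, ranking change scenarios by probability and impact, can be sketched as follows. This is a minimal illustration only: the scenario names, the 1-5 ordinal scales, and the product-based risk score are assumptions for the sake of the example, not details fixed by the paper.

```python
def rank_change_scenarios(scenarios):
    """Rank change scenarios by descending risk, where
    risk = probability * impact (a common risk-matrix heuristic;
    the paper's exact scoring scheme may differ)."""
    return sorted(
        scenarios,
        key=lambda s: s["probability"] * s["impact"],
        reverse=True,
    )

# Illustrative, made-up scenarios for a self-adaptive system,
# scored on hypothetical 1-5 ordinal scales.
scenarios = [
    {"name": "workload spike",  "probability": 4, "impact": 3},  # risk 12
    {"name": "node failure",    "probability": 2, "impact": 5},  # risk 10
    {"name": "config drift",    "probability": 3, "impact": 2},  # risk 6
]

ranked = rank_change_scenarios(scenarios)
print([s["name"] for s in ranked])
# → ['workload spike', 'node failure', 'config drift']
```

The highest-ranked scenarios would then be candidates for inclusion in the benchmark's changeload, with probability estimates drawn from field data or expert judgment and impact estimates refined experimentally.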