Assessing the quality of model-comparison tools: a method and a benchmark data set

  • Authors:
  • Mark van den Brand; Albert Hofkamp; Tom Verhoeff; Zvezdan Protić

  • Affiliations:
  • Eindhoven University of Technology, Eindhoven, The Netherlands (all authors)

  • Venue:
  • Proceedings of the 2nd International Workshop on Model Comparison in Practice

  • Year:
  • 2011

Abstract

Model comparison is an important aspect of model-driven software engineering. In particular, exploring the evolution of a model would be impossible without means for comparing different versions of that model. However, the techniques and tools for model comparison are still being perfected for practical application. Moreover, there exist no systematic methods and no controlled benchmarks for assessing the quality of model-comparison tools. In this paper, we describe a systematic method for assessing the quality of model-comparison tools, and we present a data set to be used for controlled assessment experiments. Additionally, we use our method and the specified data set to assess the quality of two model-comparison tools, namely EMFCompare and RCVDiff. The results of the experiments show that, in generic cases, both tools exhibit similar performance and are of similar quality, though there are some notable differences in the details. The defined method, the selected data set, and the results obtained by assessing the two tools together constitute a benchmark for model-comparison tools.
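For readers unfamiliar with the kind of comparison being benchmarked, the sketch below illustrates how two versions of an EMF model can be compared programmatically. It is not the paper's experimental setup: it uses the EMF Compare 2.x Java API (the paper predates that release), and the model file names are placeholders chosen for illustration.

```java
import java.util.List;

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.compare.Comparison;
import org.eclipse.emf.compare.Diff;
import org.eclipse.emf.compare.EMFCompare;
import org.eclipse.emf.compare.scope.DefaultComparisonScope;
import org.eclipse.emf.compare.scope.IComparisonScope;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

public class CompareModelVersions {
    public static void main(String[] args) {
        // Standalone EMF setup: register an XMI factory for all file extensions.
        ResourceSet resourceSet = new ResourceSetImpl();
        resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
                .put("*", new XMIResourceFactoryImpl());

        // Load two versions of the same model (placeholder paths).
        Resource left = resourceSet.getResource(URI.createFileURI("model_v1.xmi"), true);
        Resource right = resourceSet.getResource(URI.createFileURI("model_v2.xmi"), true);

        // Two-way comparison scope: no common ancestor, so origin is null.
        IComparisonScope scope = new DefaultComparisonScope(left, right, null);

        // Run EMF Compare with its default match and diff engines.
        Comparison comparison = EMFCompare.builder().build().compare(scope);

        // Each Diff describes one detected change (ADD, DELETE, CHANGE, MOVE).
        List<Diff> differences = comparison.getDifferences();
        for (Diff diff : differences) {
            System.out.println(diff.getKind() + ": " + diff);
        }
    }
}
```

A benchmark such as the one described in the abstract would feed pairs of model versions with known, controlled differences into tools like EMFCompare and RCVDiff and check the reported differences against the expected ones.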