Model comparison is an important aspect of model-driven software engineering. In particular, exploring the evolution of a model would be impossible without means for comparing different versions of that model. However, the techniques and tools for model comparison are still being perfected for practical application. Moreover, there exist no systematic methods and no controlled benchmarks for assessing the quality of model-comparison tools. In this paper, we describe a systematic method for assessing the quality of model-comparison tools, and we present a data set to be used in controlled assessment experiments. Additionally, we apply our method and the specified data set to assess the quality of two model-comparison tools, namely EMFCompare and RCVDiff. The results of the experiments show that, in generic cases, both tools exhibit similar performance and are of similar quality, although there are some notable differences in the details. The defined method, the selected data set, and the results obtained by assessing the two tools together constitute a benchmark for model-comparison tools.