Searching the Boundaries of a Modeling Space to Test Metamodels

  • Authors:
  • Juan Jose Cadavid Gomez, Benoit Baudry, Houari Sahraoui

  • Venue:
  • ICST '12 Proceedings of the 2012 IEEE Fifth International Conference on Software Testing, Verification and Validation
  • Year:
  • 2012

Abstract

Model-driven software development relies on metamodels to formally capture modeling spaces. Metamodels specify concepts and the relationships between them in order to represent either a specific business domain model or the input and output domains of operations on models (e.g., model refinement). In all cases, a metamodel is a finite description of a possibly infinite set of models, i.e., the set of all models whose structure conforms to the description specified in the metamodel. However, there is currently no systematic method to test that a metamodel captures all the correct models of the domain and no more. In this paper, we focus on the automatic selection of a set of models in the modeling space captured by a metamodel. The selected set should cover as many representative situations as possible while remaining as small as possible for further manual analysis. We use simulated annealing to select a set of models that satisfies these two objectives, and we report on results obtained with two metamodels from two different domains.
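
To make the selection strategy concrete, the sketch below shows a generic simulated-annealing loop that trades off coverage against set size. It is an illustrative assumption, not the paper's implementation: the `candidates` list, the `coverage` scoring function, the weighting factor `alpha`, and the geometric cooling schedule are all hypothetical placeholders standing in for the paper's model fragments and coverage criteria.

```python
import math
import random

def select_model_set(candidates, coverage, alpha=0.7,
                     initial_temp=1.0, cooling=0.95, steps=2000):
    """Simulated-annealing sketch: choose a subset of candidate models that
    maximizes coverage of modeling situations while staying small.

    `candidates` is a non-empty list of (hashable) model objects and
    `coverage(subset)` returns a value in [0, 1]; both are hypothetical
    placeholders, not the artifacts used in the paper.
    """
    def score(subset):
        # Weighted trade-off: reward coverage, penalize relative set size.
        size_penalty = len(subset) / len(candidates)
        return alpha * coverage(subset) - (1 - alpha) * size_penalty

    current = set(random.sample(candidates, k=max(1, len(candidates) // 4)))
    best, best_score = set(current), score(current)
    temp = initial_temp

    for _ in range(steps):
        # Neighborhood move: toggle one model in or out of the current set.
        neighbor = set(current)
        m = random.choice(candidates)
        if m in neighbor and len(neighbor) > 1:
            neighbor.remove(m)
        else:
            neighbor.add(m)

        delta = score(neighbor) - score(current)
        # Always accept improvements; accept worse moves with a
        # temperature-dependent probability (Metropolis criterion).
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = neighbor
            if score(current) > best_score:
                best, best_score = set(current), score(current)
        temp *= cooling  # geometric cooling schedule

    return best
```

A caller would supply its own candidate models and a coverage function (for example, one counting which metamodel classes, attributes, and multiplicities are exercised); the annealing loop itself is independent of how coverage is measured.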