Fine-tuning model transformation: change propagation in context of consistency, completeness, and human guidance

  • Authors:
  • Alexander Egyed, Andreas Demuth, Achraf Ghabi, Roberto Lopez-Herrejon, Patrick Mäder, Alexander Nöhrer, Alexander Reder

  • Affiliations:
  • Institute for Systems Engineering and Automation, Johannes Kepler University, Linz, Austria (all authors)

  • Venue:
  • ICMT'11: Proceedings of the 4th International Conference on Theory and Practice of Model Transformations
  • Year:
  • 2011

Abstract

An important role of model transformation is exchanging modeling information among diverse modeling languages. However, while a model is typically constrained by other models, additional information is often necessary to transform these models completely. This dilemma poses unique challenges for the model transformation community. To counter this problem, we require a smart transformation assistant. Such an assistant should be able to combine information from diverse models, react incrementally so that transformation proceeds as information becomes available, and accept human guidance, ranging from direct queries to understanding the designers' intentions. It should embrace variability to explicitly express and constrain uncertainties during transformation, for example by transforming alternatives (if no unique transformation result is computable) and constraining these alternatives during subsequent modeling. We would want this smart assistant to optimize how it seeks guidance, perhaps by asking the most beneficial questions first while avoiding questions at inappropriate times. Finally, we would want to ensure that such an assistant produces correct transformation results despite the presence of inconsistencies. Inconsistencies are often tolerated, yet their presence may inadvertently trigger erroneous transformations, requiring backtracking and/or sandboxing of transformation results. This paper explores these and other issues concerning model transformation and sketches challenges and opportunities.
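
The abstract's notion of transforming alternatives and constraining them later can be illustrated with a minimal sketch. The class below is hypothetical and not the authors' tool: a transformation that cannot compute a unique target value keeps a set of alternatives and narrows it as constraints or designer answers arrive, committing only once a single alternative remains.

import java.util.LinkedHashSet;
import java.util.Set;
import java.util.function.Predicate;

/**
 * Minimal sketch (hypothetical names): a transformation result that keeps a set
 * of alternatives when the source model does not determine a unique target, and
 * narrows that set as constraints or human answers become available.
 */
public class AlternativeResult<T> {
    private final Set<T> alternatives = new LinkedHashSet<>();

    public AlternativeResult(Set<T> initial) {
        alternatives.addAll(initial);
    }

    /** Apply a newly learned constraint; alternatives violating it are dropped. */
    public void constrain(Predicate<T> constraint) {
        alternatives.removeIf(constraint.negate());
    }

    /** A unique transformation result exists only once one alternative remains. */
    public boolean isResolved() {
        return alternatives.size() == 1;
    }

    public Set<T> current() {
        return Set.copyOf(alternatives);
    }

    public static void main(String[] args) {
        // Example: a target element whose visibility is not determined by the source model.
        AlternativeResult<String> visibility =
                new AlternativeResult<>(Set.of("public", "protected", "private"));

        // Later modeling (or a designer's answer) rules out private access.
        visibility.constrain(v -> !v.equals("private"));
        System.out.println(visibility.current());      // [public, protected]

        // A further answer pins the result down; the transformation can now commit.
        visibility.constrain(v -> v.equals("public"));
        System.out.println(visibility.isResolved());   // true
    }
}

In this reading, a smart assistant would decide when to ask the designer a question (to shrink such alternative sets quickly) and when to wait for ordinary modeling activity to do the constraining instead.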