Extensible and automated model-evaluations with INProVE

  • Authors:
  • Sören Kemmann, Thomas Kuhn, Mario Trapp

  • Affiliations:
  • Fraunhofer IESE (all authors)

  • Venue:
  • SAM'10: Proceedings of the 6th International Conference on System Analysis and Modeling: About Models
  • Year:
  • 2010


Abstract

Model-based development is gaining ever more importance for the creation of software-intensive embedded systems. One important aspect of software models is model quality. This refers not to functional correctness but to non-functional properties such as maintainability, scalability, and extensibility. Considerable effort has gone into developing metrics for control-flow models. In the embedded-systems domain, however, domain-specific and data-flow languages are commonly used for model creation, and existing metrics are not applicable to them. Domain- and project-specific quality metrics are therefore defined informally, and tracking conformance to them is a manual, effort-intensive task. To resolve this situation, we developed INProVE, a model-based framework that supports the definition of quality metrics in an intuitive yet formal notation and provides automated evaluation of design models through its indicators. Applied to complex models in several industry projects, INProVE has proven its applicability for quality assessment of data-flow-oriented design models not only in research but also in practice.
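To illustrate the idea of an automated quality indicator over a data-flow model, the following is a minimal sketch. The model representation and the indicator name (`fan_out_indicator`) are illustrative assumptions, not INProVE's actual API: the point is that a project-specific rule (here, a fan-out limit as a maintainability heuristic) can be stated formally and checked automatically instead of by manual review.

```python
# Hypothetical sketch of an automated quality indicator for a data-flow
# model; the data structure and names are assumptions for illustration,
# not INProVE's actual notation or API.

# A data-flow model as blocks with directed signal connections.
model = {
    "Sensor":     ["Filter"],
    "Filter":     ["Controller"],
    "Controller": ["ActuatorA", "ActuatorB", "Logger", "Monitor", "Watchdog"],
}

def fan_out_indicator(model, threshold=4):
    """Flag blocks whose fan-out exceeds a project-specific threshold,
    a simple maintainability indicator for data-flow models."""
    return {block: len(targets)
            for block, targets in model.items()
            if len(targets) > threshold}

violations = fan_out_indicator(model)
print(violations)  # → {'Controller': 5}
```

Such an indicator runs on every model revision, so conformance tracking becomes a by-product of the build rather than a manual review step.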