Understanding understandability of conceptual models --- what are we actually talking about?

  • Authors:
  • Constantin Houy; Peter Fettke; Peter Loos

  • Affiliations:
  • Institute for Information Systems (IWi) at the German Research Center for Artificial Intelligence (DFKI), Saarland University, Saarbrücken, Germany (all authors)

  • Venue:
  • ER'12: Proceedings of the 31st International Conference on Conceptual Modeling
  • Year:
  • 2012

Abstract

Investigating and improving the quality of conceptual models has gained tremendous importance in recent years. Model understandability is generally regarded as one of the most important model quality goals and criteria. A considerable number of empirical studies, especially experiments, have been conducted to investigate factors influencing the understandability of conceptual models. However, a thorough review and reconstruction of 42 experiments on conceptual model understandability conducted in this research shows that there is a variety of different understandings and conceptualizations of the term model understandability. As a consequence, the term remains ambiguous, and research results on model understandability are hardly comparable and partly imprecise, which shows the need to clarify what the conceptual modeling community is actually talking about when the term model understandability is used. To overcome this shortcoming, our research classifies the different observed dimensions of model understandability in a reference framework. Moreover, implications of the findings are presented and discussed, and guidelines for future research on model understandability are given.