A Framework for Empirical Evaluation of Model Comprehensibility

  • Authors:
  • Jorge Aranda; Neil Ernst; Jennifer Horkoff; Steve Easterbrook

  • Affiliations:
  • University of Toronto, Canada (all authors)

  • Venue:
  • MISE '07 Proceedings of the International Workshop on Modeling in Software Engineering
  • Year:
  • 2007

Abstract

If designers of modelling languages want their creations to be used in real software projects, the communication qualities of their languages need to be evaluated, and the languages must evolve in response to these evaluations. A key quality of communication artifacts is their comprehensibility. We present a flexible framework for evaluating the comprehensibility of model representations, grounded in the underlying theory of the language being evaluated and in theoretical frameworks from cognitive science.