To empirically investigate conceptual modeling languages, subjects are typically confronted with experimental tasks, such as the creation, modification, or understanding of conceptual models. Performance is usually assessed in terms of accuracy, i.e., the number of correctly performed tasks divided by the total number of tasks. Even though accuracy is widely adopted, it suffers from two often overlooked problems. First, accuracy is a rather insensitive measure. Second, for tasks of low complexity, accuracy measurements may be distorted by peculiarities of the human mind. To tackle these problems, we propose to additionally assess the subjects' mental effort, i.e., the mental resources required to perform a task. In particular, we show how the aforementioned problems connected to accuracy can be resolved, demonstrate that mental effort is a valid measure of performance, and illustrate how mental effort can easily be assessed in empirical research.
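As a minimal sketch of the two measures discussed above: accuracy is the fraction of correctly performed tasks, and mental effort is commonly captured through per-task self-ratings on a Likert-type scale (the 7-point scale below is an assumption for illustration, not a detail given in the abstract). The helper names and the sample data are hypothetical.

```python
def accuracy(task_results):
    """task_results: list of booleans, True if the task was solved correctly.

    Returns correctly performed tasks divided by total number of tasks.
    """
    return sum(task_results) / len(task_results)


def mean_mental_effort(ratings):
    """ratings: per-task self-reported mental effort, e.g., on a scale
    from 1 (very low effort) to 7 (very high effort) -- an assumed scale.
    """
    return sum(ratings) / len(ratings)


# Hypothetical data for one subject across four experimental tasks.
results = [True, True, False, True]
effort = [2, 3, 6, 4]

print(accuracy(results))            # 0.75
print(mean_mental_effort(effort))   # 3.75
```

Note how the two measures complement each other: two subjects with identical accuracy can still differ markedly in the mental effort they invested, which is exactly the additional sensitivity the abstract argues for.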