Evaluating the quality of information models: empirical testing of a conceptual model quality framework

  • Authors:
  • Daniel L. Moody; Guttorm Sindre; Terje Brasethvik; Arne Sølvberg

  • Affiliations:
  • Charles University, Prague, Czech Republic (Moody); Norwegian University of Science and Technology, Trondheim, Norway (Sindre, Brasethvik, Sølvberg)

  • Venue:
  • Proceedings of the 25th International Conference on Software Engineering
  • Year:
  • 2003


Abstract

This paper presents an empirical evaluation of a semiotics-based framework for assuring the quality of information models. 192 participants were trained in the concepts of the quality framework and used it to evaluate models represented in an extended Entity Relationship (ER) language. A randomised, double-blind design was used, in which each participant independently reviewed multiple models and each model was evaluated by multiple reviewers. A combination of quantitative and qualitative techniques was used to analyse the results, including reliability analysis, validity analysis, interaction analysis, influence analysis, defect pattern analysis and task accuracy analysis. The framework's likelihood of adoption in practice was also assessed. The study provides strong support for the validity of the framework and suggests that it is likely to be adopted in practice, but raises questions about its reliability and about participants' ability to use it to identify defects accurately. The findings provide clear directions for improving the framework, and the research methodology offers a general approach to the empirical validation of quality frameworks.