Scalable analysis of conceptual data models

  • Authors:
  • Matthew J. McGill (Michigan State University, East Lansing, MI)
  • Laura K. Dillon (Michigan State University, East Lansing, MI)
  • R. E. K. Stirewalt (LogicBlox, Inc., Atlanta, GA)

  • Venue:
  • Proceedings of the 2011 International Symposium on Software Testing and Analysis
  • Year:
  • 2011

Abstract

Conceptual data models describe information systems without the burden of implementation details, and are increasingly used to generate code. They could also be analyzed for consistency and used to generate test data, except that the expressive constraints supported by popular modeling notations make such analysis intractable. In an earlier empirical study of conceptual models created at LogicBlox, Inc., Smaragdakis, Csallner, and Subramanian found that a restricted subset of ORM, called ORM−, includes the vast majority of constraints used in practice and, moreover, allows scalable analysis. After that study, however, LogicBlox, Inc. obtained a new ORM modeling tool, which supports discovery and specification of more complex constraints than the previous tool. We report findings of a follow-up study of models constructed using the more powerful tool. Our study finds that LogicBlox developers increasingly rely on a small number of features not in the ORM− subset. We extend ORM− with support for two of them: objectification and a restricted class of external uniqueness constraints. The extensions significantly improve our ability to analyze the ORM models created by developers using the new tool. We also show that a recent change to ORM has rendered the original ORM− algorithms unsound in general, but that an efficient test suffices to show that these algorithms are in fact sound for the ORM− constraints appearing in any of the models currently in use at LogicBlox.
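For readers unfamiliar with the constraint class named in the abstract, the following is a minimal illustrative sketch, not the paper's algorithm: in ORM, an external uniqueness constraint spans roles drawn from different fact types, asserting that the combination of values in those roles identifies at most one instance of the shared object type. The fact types, populations, and names below (Employee, works_in, holds) are hypothetical examples invented for illustration.

```python
from collections import defaultdict

# Hypothetical populations of two binary fact types sharing the object
# type Employee: "Employee works in Department" and "Employee holds Title".
works_in = {"alice": "sales", "bob": "sales", "carol": "engineering"}
holds = {"alice": "manager", "bob": "associate", "carol": "manager"}

def satisfies_external_uniqueness(fact1, fact2):
    """Check an external uniqueness constraint over the Department and
    Title roles: joining the two fact populations on their shared
    Employee role, each (department, title) pair may identify at most
    one employee."""
    employees_for_pair = defaultdict(set)
    for employee in fact1.keys() & fact2.keys():
        employees_for_pair[(fact1[employee], fact2[employee])].add(employee)
    return all(len(emps) == 1 for emps in employees_for_pair.values())

print(satisfies_external_uniqueness(works_in, holds))  # True: each pair is unique
```

This population satisfies the constraint; adding a second employee in sales with the title manager would violate it, since the pair ("sales", "manager") would then identify two distinct employees.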