New approaches to object processing in engineering databases

  • Authors:
  • Theo Härder

  • Affiliations:
  • University of Kaiserslautern, West Germany

  • Venue:
  • OODS '86: Proceedings of the 1986 international workshop on Object-oriented database systems
  • Year:
  • 1986

Abstract

It is widely recognized that conventional database management systems (DBMS) offer poor support and unsatisfactory performance for applications in the various engineering disciplines. Many weaknesses have already been identified by a growing number of researchers in the DBMS field, concerning architectural properties, data models, and storage structures. In this context we have performed broad prototype investigations of three engineering applications. A first problem analysis, summarizing our empirical investigations, points to the following major reasons for the clumsy representation and management of objects and, consequently, for the huge overhead in conventional DBMS applications:

  • modeling of n:m-relationships is tedious and cumbersome,
  • data handling without locality preservation is overly expensive,
  • object-supporting interfaces need efficient access to sets of records of heterogeneous types.

To improve this situation, we make new proposals for an object-oriented data model, for a decomposition approach that splits complex database operations so that parts of them can be executed in parallel, and for new processing models connecting the DBMS and the application program. Our further considerations are based on a new type of DBMS architecture, the so-called DBMS kernel architecture, consisting of an application-independent DBMS kernel and an additional layer tailored to the specific needs of the application. This application layer maps the objects/operations (ADTs) of the application interface (OSI, object-supporting interface) to the data model interface (DMI) of the underlying kernel.

For the DMI, we propose an object-oriented data model called the Molecule Atom Data model (MAD model), which allows for symmetric treatment of m:n-relationships. The design goal was a consistent extension from the processing of homogeneous to heterogeneous record sets (also called atom sets), which form the desired molecules. Molecule processing includes schema definition, specification of integrity constraints, and dynamic definition and derivation of molecules as well as manipulation operations. These concepts enable the support of molecular objects: firstly, modeling techniques that describe the structure of a complex/composite object as well as the object as an integral entity; secondly, a clear and precise operational semantics that comprises the manipulation of the entire molecule as well as the processing of its components (molecule/component insertion, deletion, selection, and modification); thirdly, support for more structural integrity (referential integrity).
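The MAD model itself is only outlined in this abstract. Purely as an illustration of heterogeneous atom sets with symmetric m:n links forming dynamically derived molecules, the following minimal Python sketch may help; all names in it (Atom, link, derive_molecule, the Face/Edge/Point example) are invented for the illustration and are not taken from the paper.

```python
# Illustrative sketch only: hypothetical atom/molecule structures in the spirit
# of the MAD model's heterogeneous atom sets with symmetric m:n relationships.
from collections import defaultdict

class Atom:
    """A record (atom) of some atom type, e.g. 'Face', 'Edge' or 'Point'."""
    def __init__(self, atom_type, key):
        self.atom_type = atom_type
        self.key = key
        self.links = defaultdict(set)      # atom type -> set of connected atoms

def link(a, b):
    """Connect two atoms symmetrically, so an m:n relationship can be
    traversed in either direction."""
    a.links[b.atom_type].add(b)
    b.links[a.atom_type].add(a)

def derive_molecule(root, molecule_type):
    """Dynamically derive a molecule: starting from a root atom, collect all
    atoms reachable along the sequence of atom types in molecule_type."""
    result, frontier = {root}, {root}
    for atom_type in molecule_type[1:]:    # the root covers molecule_type[0]
        frontier = {n for atom in frontier for n in atom.links[atom_type]}
        result |= frontier
    return result

# Usage: a tiny Face-Edge-Point molecule; the Edge-Point relationship is m:n.
face = Atom('Face', 'f1')
e1, e2 = Atom('Edge', 'e1'), Atom('Edge', 'e2')
p = Atom('Point', 'p1')
link(face, e1); link(face, e2); link(e1, p); link(e2, p)
print(sorted(a.key for a in derive_molecule(face, ('Face', 'Edge', 'Point'))))
# -> ['e1', 'e2', 'f1', 'p1']
```

The symmetric link is the point of the sketch: the same m:n relationship can be entered from either side, which is the property the abstract describes as symmetric treatment of m:n-relationships.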
A second idea is the use of the parallelism inherent in complex operations at the OSI. Such operations may be conveniently decomposed, thereby excluding conflicts at the logical level, and preplanned according to the following principles:

  • decompose an OSI operation (e.g. an ADT operation) such that the sub-operations match the granules of the MAD interface (decomposition units, DUs);
  • execute sets of sequence-independent DUs in parallel (parallel execution unit, PEU).

This decomposition and parallelization concept has several consequences:

  • At the programming level, the Parbegin-Parend construct combined with the remote procedure call mechanism is used for invoking parallel actions.
  • A nested transaction concept serves to control the system activity.
  • The operations within a transaction must be synchronized against the operations of others.
  • Rollback of completed subtransactions is necessary when a parent subtransaction fails.

Since operations in engineering applications generate a huge workload for a DBMS, the proposed concepts may not be sufficient to guarantee acceptable response times in a construction environment. New models of object processing that exploit locality of reference on application objects may further improve DBMS performance. Starting from this key observation, we are currently developing new processing models for DBMS interactions. They use the following framework: in a Checkout phase, the object (a molecule or molecule components) is transferred to an object buffer, and after modification it is moved back to the DBMS kernel in a Checkin phase. There are two conceivable alternatives for the placement of the object buffer: either the object buffer is integrated into the application, yielding structure-oriented processing, i.e. direct modification by the application programs, or the object buffer lies within the application layer, yielding object-oriented processing by offering ADT operations at the OSI. These guidelines deviate from the traditional processing model. They promise a substantial increase in performance, but they introduce new problems and tradeoffs, in particular mapping accumulated changes and checking their integrity. On the other hand, they may be combined with the ideas introduced above, leading to the demanded non-standard DBMS.
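As a rough illustration of this Checkout/Checkin framework with the object buffer placed in the application layer, consider the sketch below; the kernel representation, molecule identifiers, and integrity check are assumptions made for the sketch, not an interface defined in the paper.

```python
# Illustrative sketch only: a Checkout/Checkin processing model with an object
# buffer in the application layer. The kernel representation, molecule ids and
# integrity check are assumptions made for this sketch.
import copy

class ObjectBuffer:
    """Holds checked-out molecules so that repeated modifications run locally,
    exploiting locality of reference on the application objects."""
    def __init__(self, kernel):
        self.kernel = kernel               # stand-in for the DBMS kernel
        self.buffered = {}

    def checkout(self, molecule_id):
        """Checkout phase: transfer the molecule from the kernel to the buffer."""
        self.buffered[molecule_id] = copy.deepcopy(self.kernel[molecule_id])
        return self.buffered[molecule_id]

    def checkin(self, molecule_id, integrity_check=lambda molecule: True):
        """Checkin phase: check the accumulated changes, then move them back."""
        molecule = self.buffered.pop(molecule_id)
        if not integrity_check(molecule):
            raise ValueError(f"integrity violation: molecule {molecule_id} rejected")
        self.kernel[molecule_id] = molecule

# Usage: modify a buffered molecule locally, then propagate the changes at once.
kernel = {"m1": {"faces": 6, "edges": 12, "points": 8}}
buffer = ObjectBuffer(kernel)
molecule = buffer.checkout("m1")
molecule["faces"] = 7                      # local updates touch only the buffer
buffer.checkin("m1", integrity_check=lambda m: m["faces"] > 0)
print(kernel["m1"]["faces"])               # -> 7
```

Repeated updates touch only the buffered copy, which is where the exploited locality of reference comes from; only Checkin pays the cost of mapping the accumulated changes back and checking their integrity.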
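Returning to the decomposition and parallelization concept described earlier, the following sketch shows how sequence-independent decomposition units might be run as one parallel execution unit, with completed sub-operations rolled back when a sibling fails. Python threads stand in here for the paper's Parbegin-Parend construct with remote procedure calls, and all names are invented for the illustration.

```python
# Illustrative sketch only: parallel execution of decomposition units (DUs) as
# one parallel execution unit (PEU), with rollback of completed sub-operations
# when a sibling fails. Threads replace Parbegin-Parend plus remote procedure
# calls; the class and function names are assumptions for this sketch.
from concurrent.futures import ThreadPoolExecutor

class DecompositionUnit:
    """One sub-operation of an OSI operation, sized to a MAD-interface granule."""
    def __init__(self, name, do, undo):
        self.name, self.do, self.undo = name, do, undo

def run_parallel_execution_unit(dus):
    """Run sequence-independent DUs concurrently; behave like a nested
    transaction by undoing completed DUs if any DU fails."""
    with ThreadPoolExecutor() as pool:
        futures = [(du, pool.submit(du.do)) for du in dus]
    # Leaving the 'with' block waits for all DUs to finish.
    failed = [du for du, f in futures if f.exception() is not None]
    if failed:
        for du, f in futures:
            if f.exception() is None:      # roll back completed subtransactions
                du.undo()
        raise RuntimeError("PEU aborted, failed DUs: "
                           + ", ".join(du.name for du in failed))

# Usage: two independent updates executed in parallel.
log = []
dus = [DecompositionUnit("update-face", lambda: log.append("face"),
                         lambda: log.remove("face")),
       DecompositionUnit("update-edge", lambda: log.append("edge"),
                         lambda: log.remove("edge"))]
run_parallel_execution_unit(dus)
print(log)                                 # ['face', 'edge'] (order may vary)
```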