DLFM: a transactional resource manager

  • Authors: Hui-I Hsiao, Inderpal Narang
  • Affiliation: IBM Almaden Research Center, San Jose, CA
  • Venue: SIGMOD '00: Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data
  • Year: 2000

Abstract

The DataLinks technology developed at IBM Almaden Research Center, and now available in DB2 UDB 5.2, introduces a new data type called DATALINK that lets a database reference and manage files stored external to the database. An external file is put under database control by “linking” the file to the database; control of a file can likewise be removed by “unlinking” it. The technology provides transactional semantics for linking and unlinking when a DATALINK value is stored or updated. Furthermore, it provides the following set of properties: (1) managing access control to linked files; (2) enforcing referential integrity, e.g., a referenced file cannot be deleted or renamed as long as it is referenced from the RDBMS; and (3) providing coordinated backup and recovery of RDBMS data together with the file data.

The DataLinks File Manager (DLFM) is a key component of the DataLinks technology. DLFM is a sophisticated SQL application with a set of daemon processes residing at a file server node that work cooperatively with the host database server(s) to manage external files. To reduce the number of messages between the database server and DLFM, DLFM maintains a set of metadata on the file system and on the files that are under database control. One of the major decisions we made was to build DLFM on top of an existing database manager, such as DB2, instead of implementing a proprietary persistent data store. We have mixed feelings about using an RDBMS to build such a resource manager. One of the major challenges is supporting transactional semantics for DLFM operations. To do this, we implemented the two-phase commit protocol in DLFM and designed an innovative scheme that enables rolling back a transaction's updates even after the local database has committed. Another major gotcha is that the RDBMS's cost-based optimizer generates the access plan without taking into account the locking costs of a concurrent workload.
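The two-phase commit coordination described above, in which the database server and DLFM must either both commit or both abort a link/unlink operation, can be sketched as follows. This is a minimal illustration of the general protocol, not the paper's actual implementation; the `Participant` and `two_phase_commit` names are hypothetical, and DLFM's scheme for undoing work after a local database commit is not modeled here.

```python
class Participant:
    """A resource manager in the protocol (e.g., the RDBMS or the
    file manager). Hypothetical names, for illustration only."""

    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "active"

    def prepare(self):
        # Phase 1: vote on whether the local work can be committed.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):
        # Phase 2: make the local work durable.
        self.state = "committed"

    def rollback(self):
        # Undo the local work. (DLFM's innovation is allowing this
        # even after a local commit, which this sketch omits.)
        self.state = "aborted"


def two_phase_commit(participants):
    """Coordinator: commit only if every participant votes yes."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.rollback()
    return "aborted"
```

For a link operation, the database (which stores the DATALINK value) and DLFM (which records the link in its metadata) would both act as participants, so the file is linked exactly when the DATALINK value becomes visible.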
Using the RDBMS as a black box can cause “havoc” by triggering lock timeouts and deadlocks and by reducing the throughput of a concurrent workload. To solve this problem, we came up with a simple but effective way of influencing the optimizer to generate access plans that match the needs of the DLFM implementation. Several other precautions also had to be taken: we ensured that lock escalation did not take place; we disabled next-key locking to avoid deadlocks on heavily used indexes and SQL tables; and we applied a timeout mechanism to break global deadlocks. In system test, we were able to run a 100-client workload for 24 hours without significant deadlock or timeout problems. This paper describes the motivation for building DLFM and the lessons that we have learned from this experience.
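The timeout mechanism mentioned above can be illustrated with a small sketch: a lock whose acquisition waits only for a bounded time, so that a global deadlock (a wait cycle spanning the database server and DLFM) surfaces as a timeout the caller can handle by rolling back and retrying, rather than as an indefinite hang. The `TimeoutLock` name and the retry convention are assumptions for illustration, not DLFM's actual code.

```python
import threading


class TimeoutLock:
    """A lock with a bounded wait. On timeout the caller is expected
    to roll back its transaction and retry, which breaks any deadlock
    cycle it participates in. Illustrative sketch only."""

    def __init__(self, timeout=0.1):
        self._lock = threading.Lock()
        self._timeout = timeout

    def acquire(self):
        # Returns False after the timeout instead of blocking forever.
        return self._lock.acquire(timeout=self._timeout)

    def release(self):
        self._lock.release()
```

The design choice here mirrors the paper's: detecting deadlocks that span two independent lock managers is hard, so a conservative timeout is used as the deadlock breaker instead of a global wait-for graph.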