Locking performance in centralized databases

  • Authors:
  • Y. C. Tay; Nathan Goodman; R. Suri

  • Affiliations:
  • National Univ. of Singapore, Kent Ridge, Republic of Singapore; Harvard Univ., Cambridge, MA; Harvard Univ., Cambridge, MA

  • Venue:
  • ACM Transactions on Database Systems (TODS)
  • Year:
  • 1985

Abstract

An analytic model is used to study the performance of dynamic locking. The analysis uses only the steady-state average values of the variables. The solution to the model is given by a cubic, which has exactly one valid root for the range of parametric values that is of interest. The model's predictions agree well with simulation results for transactions that require up to twenty locks. The model separates data contention from resource contention, thus facilitating an analysis of their separate effects and their interaction. It shows that systems with a particular form of nonuniform access, or with shared locks, are equivalent to systems with uniform access and only exclusive locks.

Blocking due to conflicts is found to impose an upper bound on transaction throughput; this fact leads to a rule of thumb on how much data contention should be permitted in a system. Throughput can exceed this bound if a transaction is restarted whenever it encounters a conflict, provided restart costs and resource contention are low. It can also be exceeded by making transactions predeclare their locks. Raising the multiprogramming level to increase throughput also raises the number of restarts per completion. Transactions should minimize their lock requests, because data contention is proportional to the square of the number of requests. The choice of how much data to lock at a time depends on which part of a general granularity curve the system sees.
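
The square-law claim invites a quick numerical check. The Python sketch below is not the paper's analytic model; it is a minimal Monte Carlo illustration under simplifying assumptions: uniform access over D items, a fixed multiprogramming level N, and each competing transaction holding about half of its k locks (a steady-state average in the spirit of the model). All parameter values and names are illustrative.

    import random

    # Monte Carlo check of the square-law: with N concurrent transactions
    # over D items, each competitor assumed to hold about half of its k
    # locks (steady-state average, an assumption), the chance that a fresh
    # transaction meets at least one conflict while acquiring its k locks
    # grows roughly as k^2 * (N - 1) / (2 * D).

    def conflict_probability(num_txns, locks_per_txn, db_size, trials=10_000):
        """Estimate the probability that a transaction requesting
        `locks_per_txn` uniformly chosen items conflicts with the locks
        currently held by the other `num_txns - 1` transactions."""
        held_per_other = locks_per_txn // 2  # steady-state average (assumption)
        conflicts = 0
        for _ in range(trials):
            held = set()
            for _ in range(num_txns - 1):
                held.update(random.sample(range(db_size), held_per_other))
            requested = random.sample(range(db_size), locks_per_txn)
            if any(item in held for item in requested):
                conflicts += 1
        return conflicts / trials

    if __name__ == "__main__":
        D, N = 10_000, 10          # database size and multiprogramming level
        for k in (4, 10, 20):      # lock requests per transaction
            simulated = conflict_probability(N, k, D)
            square_law = k * (N - 1) * (k // 2) / D
            print(f"k={k:2d}  simulated={simulated:.3f}  square-law={square_law:.3f}")

For these values the estimate tracks k^2(N - 1)/(2D) closely, which is the quadratic growth in data contention that the abstract refers to: doubling the number of lock requests roughly quadruples the conflict probability.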