A modeling study of the TPC-C benchmark

  • Authors:
  • Scott T. Leutenegger; Daniel Dias

  • Affiliations:
  • ICASE: Institute for Computer Applications in Science and Engineering, Mail Stop 132c, NASA Langley Research Center, Hampton, VA; IBM Research Division, T.J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY

  • Venue:
  • SIGMOD '93 Proceedings of the 1993 ACM SIGMOD international conference on Management of data
  • Year:
  • 1993


Abstract

The TPC-C benchmark is a new benchmark, approved by the Transaction Processing Performance Council (TPC), intended for comparing database platforms running a medium-complexity transaction processing workload. Key aspects in which this new benchmark differs from the TPC-A benchmark are that it has several transaction types, some of which are more complex than the single transaction type in TPC-A, and that it has data access skew. In this paper we present results from a modeling study of the TPC-C benchmark for both single-node and distributed database management systems. We simulate the TPC-C workload to determine expected buffer miss rates assuming an LRU buffer management policy. These miss rates are then used as inputs to a throughput model. From these models we show the following: (i) We quantify the data access skew as specified in the benchmark and show what fraction of the accesses go to what fraction of the data. (ii) We quantify the resulting buffer hit ratios for each relation as a function of buffer size. (iii) We show that close to linear scale-up (about 3% from the ideal) can be achieved in a distributed system, assuming replication of a read-only table. (iv) We examine the effect of packing hot tuples into pages and show that a significant price/performance benefit can thus be achieved. (v) Finally, by coupling the buffer simulations with the throughput model, we examine typical disk/memory configurations that maximize overall price/performance.
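The core of the method described above is driving a throughput model with buffer miss rates obtained by simulating an LRU buffer under a skewed access pattern. The sketch below is not the authors' simulator: the data sizes, the generic 80/20 hot-set skew, and the function name simulate_lru_hit_ratio are illustrative assumptions, not the access skew specified in the TPC-C benchmark.

```python
# Minimal sketch: estimate LRU buffer hit ratio under a skewed access pattern.
# The 80/20 hot-set skew and all sizes are illustrative assumptions only.
import random
from collections import OrderedDict

def simulate_lru_hit_ratio(num_pages, buffer_pages, num_refs,
                           hot_fraction=0.2, hot_prob=0.8):
    """Simulate an LRU-managed buffer and return the observed hit ratio."""
    hot_cutoff = max(1, int(num_pages * hot_fraction))
    buffer = OrderedDict()  # page id -> None, ordered from least to most recently used
    hits = 0
    for _ in range(num_refs):
        # With probability hot_prob, the reference goes to the hot subset of pages.
        if random.random() < hot_prob:
            page = random.randrange(hot_cutoff)
        else:
            page = random.randrange(hot_cutoff, num_pages)
        if page in buffer:
            hits += 1
            buffer.move_to_end(page)          # mark as most recently used
        else:
            if len(buffer) >= buffer_pages:
                buffer.popitem(last=False)    # evict the least recently used page
            buffer[page] = None
    return hits / num_refs

if __name__ == "__main__":
    # Sweep buffer sizes to see the hit ratio as a function of buffer size.
    for frac in (0.05, 0.10, 0.25, 0.50):
        ratio = simulate_lru_hit_ratio(num_pages=10_000,
                                       buffer_pages=int(10_000 * frac),
                                       num_refs=200_000)
        print(f"buffer = {frac:.0%} of data -> hit ratio = {ratio:.3f}")
```

The observed hit ratios (or equivalently, miss rates) from such a simulation are the quantities that would then feed a separate throughput and price/performance model, as the abstract describes.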