Does better throughput require worse latency?

  • Authors:
  • David Ungar; Doug Kimelman; Sam Adams; Mark Wegman

  • Affiliations:
  • IBM Research, Yorktown Heights, NY, USA (all four authors)

  • Venue:
  • Proceedings of the 2012 ACM Workshop on Relaxing Synchronization for Multicore and Manycore Scalability (RACES '12)
  • Year:
  • 2012


Abstract

Let throughput denote the amount of application-level work performed in unit time, normalized to the amount of work that would be accomplished with perfect linear scaling. Let latency denote the mean time required for a thread on one core to observe a change effected by a thread on another core, normalized to the best latency possible for the given platform. Might it be true that algorithms that improve application-level throughput worsen inter-core application-level latency? As techniques for improving performance have evolved from mutex-and-locks to race-and-repair, each seems to have offered more throughput at the expense of increased latency.
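Taken at face value, the abstract's two definitions are dimensionless ratios. The symbols below (W_p for application-level work completed per unit time on p cores, L_obs and L_min for the observed and best-achievable cross-core observation times) are our own notation for illustration, not the paper's:

```latex
% Normalized throughput: measured work rate on p cores relative to
% perfect linear scaling of the single-core rate W_1.
\[
\text{throughput} \;=\; \frac{W_p}{\,p \cdot W_1\,},
\qquad
\text{latency} \;=\; \frac{L_{\mathrm{obs}}}{\,L_{\mathrm{min}}\,}.
\]
% Throughput is at most 1 under perfect scaling (barring superlinear
% effects); latency is at least 1, since L_min is the best the
% platform can deliver. The paper's question is whether pushing the
% first ratio up necessarily pushes the second one up as well.
```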
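The closing sentence names two ends of a synchronization spectrum. As a purely illustrative sketch, not code from the paper, the C++ fragment below contrasts a mutex-guarded shared counter, where every increment becomes visible to other cores as soon as the lock is released, with a sharded, race-tolerant counter in the spirit of race-and-repair, whose true total only materializes after a later reconciliation pass. All names and parameters here are hypothetical.

```cpp
// Hypothetical micro-benchmark contrasting the two ends of the spectrum:
// "mutex-and-locks" versus a sharded, race-and-repair-flavored counter.
#include <atomic>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

constexpr int kThreads = 4;
constexpr int kIncrementsPerThread = 1'000'000;

// Mutex-and-locks: each increment is globally visible once the lock is
// released (low inter-core latency), but all threads serialize on the lock.
long locked_counter = 0;
std::mutex counter_mutex;

void locked_worker() {
    for (int i = 0; i < kIncrementsPerThread; ++i) {
        std::lock_guard<std::mutex> guard(counter_mutex);
        ++locked_counter;
    }
}

// Race-and-repair flavor: each thread updates a private shard with relaxed
// atomics; the true total exists only after a later "repair" (reduction),
// so another core may observe a stale count until reconciliation runs.
std::atomic<long> shards[kThreads];  // zero-initialized (static storage)

void sharded_worker(int id) {
    for (int i = 0; i < kIncrementsPerThread; ++i) {
        shards[id].fetch_add(1, std::memory_order_relaxed);
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int t = 0; t < kThreads; ++t) threads.emplace_back(locked_worker);
    for (auto& th : threads) th.join();

    threads.clear();
    for (int t = 0; t < kThreads; ++t) threads.emplace_back(sharded_worker, t);
    for (auto& th : threads) th.join();

    long repaired_total = 0;  // the "repair": reduce shards into one value
    for (auto& s : shards) repaired_total += s.load(std::memory_order_relaxed);

    std::printf("locked:  %ld\nsharded: %ld\n", locked_counter, repaired_total);
}
```

Compiled with `-std=c++17 -pthread`, the sharded version typically scales near-linearly where the locked version serializes; the cost is that a reader summing the shards mid-run can lag the true count, which is exactly the latency axis the paper's question is about.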