How FIFO is your concurrent FIFO queue?

  • Authors: Andreas Haas, Christoph M. Kirsch, Michael Lippautz, Hannes Payer

  • Affiliation: University of Salzburg, Salzburg, Austria

  • Venue: Proceedings of the 2012 ACM Workshop on Relaxing Synchronization for Multicore and Manycore Scalability

  • Year: 2012


Abstract

Designing and implementing high-performance concurrent data structures whose access performance scales on multicore hardware is difficult. Concurrent implementations of FIFO queues, for example, seem to require algorithms that efficiently increase the potential for parallel access by implementing semantically relaxed rather than strict FIFO queues, where elements may be returned in some out-of-order fashion. However, we show experimentally that the on-average shorter execution time of enqueue and dequeue operations in fast but relaxed implementations may offset the effect of the semantic relaxation, making them appear to behave more FIFO than strict but slow implementations. Our key assumption is that ideal concurrent data structure operations should execute in zero time. We define two metrics, element-fairness and operation-fairness, to measure the degree of element and operation reordering, respectively, assuming operations take zero time. Element-fairness quantifies the deviation from the FIFO queue semantics that would result if all operations had executed in zero time. With this metric, even strict implementations of FIFO queues are not FIFO. Operation-fairness helps explain element-fairness by quantifying operation reordering when considering the actual time at which operations took effect relative to their invocation time. In our experiments, the effect of poor operation-fairness in strict but slow implementations on element-fairness may outweigh the effect of the semantic relaxation in fast but relaxed implementations.
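
To make the element-fairness idea more concrete, the following is a minimal, hypothetical sketch rather than the paper's actual definition or tooling: it assumes the ideal zero-time FIFO order is given by enqueue invocation timestamps, and it counts, for each element in the observed dequeue order, how many earlier-enqueued elements it overtook. The `Op` struct and `element_overtakes` function are illustrative names introduced here, not from the paper.

```cpp
// Illustrative sketch only: a simplified element-fairness measure, assuming
// the ideal order is the order of enqueue invocation timestamps and that an
// element's "unfairness" is the number of earlier-enqueued elements it
// overtakes when dequeued. The paper's exact definitions may differ.
#include <cstdint>
#include <iostream>
#include <iterator>
#include <set>
#include <vector>

struct Op {
  uint64_t id;          // element identifier
  uint64_t enqueue_ts;  // enqueue invocation timestamp (zero-time ideal order)
};

// For each dequeued element (in observed dequeue order), counts how many
// elements with a strictly earlier enqueue timestamp were still undequeued
// at the moment it was removed.
std::vector<size_t> element_overtakes(const std::vector<Op>& dequeue_order) {
  std::multiset<uint64_t> pending;  // timestamps of not-yet-dequeued elements
  for (const Op& op : dequeue_order) pending.insert(op.enqueue_ts);

  std::vector<size_t> overtakes;
  overtakes.reserve(dequeue_order.size());
  for (const Op& op : dequeue_order) {
    pending.erase(pending.find(op.enqueue_ts));
    // Elements still pending with an earlier timestamp were overtaken.
    overtakes.push_back(static_cast<size_t>(
        std::distance(pending.begin(), pending.lower_bound(op.enqueue_ts))));
  }
  return overtakes;
}

int main() {
  // Observed dequeue order from a (possibly relaxed) concurrent FIFO queue.
  std::vector<Op> trace = {{2, 20}, {1, 10}, {3, 30}};
  for (size_t v : element_overtakes(trace)) std::cout << v << ' ';
  std::cout << '\n';  // prints "1 0 0": element 2 overtook element 1
}
```

Under this simplified view, a strict but slow implementation can still accumulate nonzero overtake counts, because operations that are invoked earlier may take effect later; this is the gap that operation-fairness is meant to expose.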