Enhanced speculative parallelization via incremental recovery

  • Authors:
  • Chen Tian, Changhui Lin, Min Feng, Rajiv Gupta

  • Affiliations:
  • University of California at Riverside, Riverside, CA, USA (all authors)

  • Venue:
  • Proceedings of the 16th ACM Symposium on Principles and Practice of Parallel Programming (PPoPP '11)
  • Year:
  • 2011

Abstract

The widespread availability of multicore systems has led to increased interest in the speculative parallelization of sequential programs using software-based thread-level speculation. Many of the proposed techniques are implemented via state separation, where non-speculative computation state is maintained separately from the speculative state of threads performing speculative computations. If speculation is successful, the results from speculative state are committed to non-speculative state. Upon misspeculation, however, a discard-all scheme is employed in which the speculatively computed results of a thread are discarded and the computation is performed again. While this scheme is simple to implement, its high runtime overhead makes it unable to tolerate high misspeculation rates. It is therefore unsuitable for applications whose misspeculation rates are input dependent and may reach high levels. In this paper we develop an approach for incremental recovery in which, instead of discarding all of the results and reexecuting the speculative computation in its entirety, the computation is restarted from the earliest point at which a misspeculation-causing value is read. This approach has two advantages. First, the cost of recovery is reduced because only part of the computation is reexecuted. Second, since recovery takes less time, the likelihood of future misspeculations is reduced. We design and implement a strategy for incremental recovery that allows the results of partial computations to be efficiently saved and reused. For a set of programs whose misspeculation rates are input dependent, our experiments show that with inputs producing misspeculation rates of around 40% and 80%, the incremental recovery technique yields speedups of 1.2x-3.3x and 2.0x-6.6x, respectively, over the discard-all recovery scheme. Furthermore, the misspeculations observed under the discard-all scheme are reduced by 10% to 85% when incremental recovery is employed.
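
The abstract describes the mechanism only at a high level; the sketch below illustrates the idea of incremental recovery under simplifying assumptions that are not taken from the paper: the speculative computation is modeled as a sequence of checkpointable steps, and reads of shared locations are explicitly logged. All identifiers (SpecWorker, State, noteRead, recover) are hypothetical and do not reflect the authors' implementation.

```cpp
// Minimal illustrative sketch of incremental recovery for one speculative
// worker. Not the paper's implementation; names and structure are invented.
#include <cstddef>
#include <functional>
#include <unordered_map>
#include <vector>

struct State { std::vector<int> partial; };        // separated speculative state
using Step = std::function<void(State&)>;          // one unit of speculative work

class SpecWorker {
  std::vector<Step> steps_;
  std::vector<State> checkpoints_;                 // state saved before each step
  std::unordered_map<const int*, std::size_t> firstRead_;  // addr -> earliest reading step

public:
  explicit SpecWorker(std::vector<Step> s) : steps_(std::move(s)) {}

  // Record a speculative read so recovery knows the earliest step that used it.
  void noteRead(const int* addr, std::size_t step) {
    auto it = firstRead_.find(addr);
    if (it == firstRead_.end() || step < it->second) firstRead_[addr] = step;
  }

  // Speculative execution: checkpoint partial results before every step.
  State run(State s) {
    checkpoints_.clear();
    for (const Step& st : steps_) {
      checkpoints_.push_back(s);
      st(s);
    }
    checkpoints_.push_back(s);                     // final speculative state
    return s;
  }

  // Discard-all would rerun from step 0; incremental recovery reuses the
  // checkpoint taken just before the earliest step that read a stale value.
  // run() must have been called before recover().
  State recover(const std::vector<const int*>& staleAddrs) {
    std::size_t restart = steps_.size();           // default: nothing to redo
    for (const int* a : staleAddrs) {
      auto it = firstRead_.find(a);
      if (it != firstRead_.end() && it->second < restart) restart = it->second;
    }
    State s = checkpoints_[restart];               // reuse work done before 'restart'
    for (std::size_t i = restart; i < steps_.size(); ++i) steps_[i](s);
    return s;
  }
};

int main() {
  int shared = 42;                                 // value read speculatively from another thread
  SpecWorker w({
      [](State& s)        { s.partial.push_back(1); },       // step 0: no shared read
      [&shared](State& s) { s.partial.push_back(shared); },  // step 1: reads 'shared'
      [](State& s)        { s.partial.push_back(3); },       // step 2: no shared read
  });
  w.noteRead(&shared, 1);

  State speculative = w.run(State{});              // speculative pass
  shared = 99;                                     // the step-1 read turns out to be stale
  State repaired = w.recover({&shared});           // re-executes only steps 1 and 2
  return repaired.partial.back() == 3 ? 0 : 1;
}
```

In this toy model, discard-all corresponds to always restarting from step 0 and throwing every checkpoint away; incremental recovery instead pays a checkpointing cost during speculation in exchange for re-executing only the suffix of the computation affected by the stale value, which is the trade-off the abstract's speedup numbers quantify.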