Synchronization via scheduling: managing shared state in video games

  • Authors:
  • Micah J. Best, Shane Mottishaw, Craig Mustard, Mark Roth, Alexandra Fedorova, Andrew Brownsword

  • Affiliations:
  • Simon Fraser University, Canada (Best, Mottishaw, Mustard, Roth, Fedorova); Electronic Arts BlackBox, Vancouver, Canada (Brownsword)

  • Venue:
  • HotPar '10: Proceedings of the 2nd USENIX Conference on Hot Topics in Parallelism
  • Year:
  • 2010

Abstract

Video games are a performance-hungry application domain whose complexity often rivals that of operating systems. These performance and complexity demands, combined with tight development schedules and large teams, mean that consistent, specialized, and pervasive support for parallelism is of paramount importance. The Cascade project is focused on designing solutions to support this application domain. In this paper we describe how the Cascade runtime extends the industry-standard job/task graph execution model with a new approach for managing shared state. Traditional task graph models dictate that tasks making conflicting accesses to shared state must be linked by a dependency, even if there is no explicit logical ordering on their execution. In cases where it is difficult to determine whether such implicit dependencies exist, the programmer tends to create more dependencies than needed, which results in constrained graphs with large monolithic tasks and limited parallelism. By using the results of offline code analysis and information exposed at runtime, the Cascade runtime automatically detects scenarios where implicit dependencies exist and schedules tasks to avoid data races. This technique is called Synchronization via Scheduling (SvS), and we present two implementations of it. The first uses Bloom-filter-based 'signatures'; the second relies on automatic data partitioning, which has optimization potential independent of SvS. Our experiments show that SvS succeeds in achieving a high degree of parallelism and allows for finer-grained tasks. However, we find that one consequence of sufficiently fine-grained tasks is that the time to dispatch them exceeds their execution time, even with a highly optimized scheduler/manager. Because fine-grained tasks are a necessary condition for sufficient parallelism and overall performance gains, this finding motivates further inquiry into how tasks are managed.
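
The signature-based implementation can be illustrated with a small sketch: each task records the (hashed) addresses of the shared state it may touch in a Bloom filter, and the scheduler only dispatches a ready task alongside a running one if their signatures do not intersect. The sketch below is an assumption-laden illustration of that idea, not the Cascade runtime's actual API; the names (Signature, may_conflict), filter size, and hashing scheme are all hypothetical.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <iostream>

// Hypothetical Bloom-filter "signature": a task adds every shared-state
// address it may access; the scheduler serializes two tasks whose
// signatures intersect. False positives only cause unnecessary
// serialization; false negatives (missed races) cannot occur.
struct Signature {
    static constexpr std::size_t kBits = 1024;        // filter size (assumed)
    std::array<std::uint64_t, kBits / 64> bits{};

    // Record a shared-state address the task may read or write.
    void add(const void* addr) {
        std::size_t h1 = std::hash<const void*>{}(addr);
        std::size_t h2 = h1 * 0x9E3779B97F4A7C15ULL;   // second cheap hash
        set(h1 % kBits);
        set(h2 % kBits);
    }

    void set(std::size_t i) { bits[i / 64] |= (1ULL << (i % 64)); }

    // Conservative overlap test between two tasks' access sets.
    bool intersects(const Signature& other) const {
        for (std::size_t i = 0; i < bits.size(); ++i)
            if (bits[i] & other.bits[i]) return true;
        return false;
    }
};

// Scheduler-side check: defer a ready task if it may conflict with a running one.
bool may_conflict(const Signature& ready_task, const Signature& running_task) {
    return ready_task.intersects(running_task);
}

int main() {
    int a = 0, b = 0;
    Signature t1, t2, t3;
    t1.add(&a);   // task 1 touches a
    t2.add(&a);   // task 2 also touches a -> must run after task 1
    t3.add(&b);   // task 3 touches only b -> can run alongside task 1

    std::cout << "t1 vs t2 conflict: " << may_conflict(t2, t1) << '\n';  // 1
    std::cout << "t1 vs t3 conflict: " << may_conflict(t3, t1) << '\n';  // likely 0
    return 0;
}
```

In this framing, the per-dispatch signature intersection is exactly the kind of overhead the abstract refers to: for sufficiently small tasks, even a cheap bitwise check plus queue management can exceed the task's own execution time.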