Spatial sound for video games and virtual environments utilizing real-time GPU-based convolution

  • Authors:
  • Brent Cowan; Bill Kapralos

  • Affiliations:
  • University of Ontario Institute of Technology, Oshawa, Ontario, Canada (both authors)

  • Venue:
  • Future Play '08: Proceedings of the 2008 Conference on Future Play: Research, Play, Share
  • Year:
  • 2008

Abstract

The generation of spatial audio is computationally very demanding and, therefore, accurate spatial audio is typically overlooked in games and virtual environment applications, leading to a decrease in both performance and the user's sense of presence or immersion. Driven by the gaming industry and the great emphasis placed on the visual sense, consumer computer graphics hardware (and the graphics processing unit, or GPU, in particular) has advanced greatly in recent years, even exceeding the computational capacity of CPUs. This has enabled real-time, interactive, realistic graphics-based applications on typical consumer-level PCs. Despite the many similarities between the fields of spatial audio and computer graphics, computer graphics (and image synthesis in particular) has advanced far beyond spatial audio, given the emphasis placed on generating believable visual cues over other perceptual cues, including auditory cues. Given the widespread availability of computer graphics hardware, as well as the similarities that exist between the fields of spatial audio and image synthesis, this work investigates the application of graphics processing units to the computationally efficient generation of spatial audio for dynamic and interactive games and virtual environments. Here we present a real-time GPU-based convolution method and illustrate its superior efficiency relative to conventional, software-based, time-domain convolution.
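
The appeal of the GPU for this task is that each output sample of a time-domain convolution, y[n] = Σ_k h[k]·x[n−k], can be computed independently and therefore in parallel. The sketch below illustrates this data parallelism in CUDA with one thread per output sample; it is only an illustrative example under assumed buffer sizes and a stand-in impulse response, not the authors' actual implementation (which targeted graphics hardware of the era), and the kernel name `convolveKernel` and all parameters are hypothetical.

```cuda
// conv_gpu_sketch.cu -- illustrative only, not the paper's implementation.
// Computes y[n] = sum_k h[k] * x[n - k] with one GPU thread per output sample.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void convolveKernel(const float *x, int xLen,
                               const float *h, int hLen,
                               float *y, int yLen)
{
    int n = blockIdx.x * blockDim.x + threadIdx.x;  // output sample index
    if (n >= yLen) return;

    float acc = 0.0f;
    for (int k = 0; k < hLen; ++k) {
        int idx = n - k;
        if (idx >= 0 && idx < xLen)
            acc += h[k] * x[idx];  // accumulate the time-domain sum
    }
    y[n] = acc;
}

int main()
{
    const int xLen = 48000;           // assumed: one second of audio at 48 kHz
    const int hLen = 256;             // assumed impulse-response length
    const int yLen = xLen + hLen - 1; // full convolution length

    // Host buffers: a unit impulse input and a trivial stand-in impulse response.
    float *x = new float[xLen](), *h = new float[hLen](), *y = new float[yLen];
    x[0] = 1.0f;
    for (int k = 0; k < hLen; ++k) h[k] = 1.0f / hLen;

    float *dx, *dh, *dy;
    cudaMalloc(&dx, xLen * sizeof(float));
    cudaMalloc(&dh, hLen * sizeof(float));
    cudaMalloc(&dy, yLen * sizeof(float));
    cudaMemcpy(dx, x, xLen * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dh, h, hLen * sizeof(float), cudaMemcpyHostToDevice);

    // One thread per output sample, 256 threads per block.
    int threads = 256;
    int blocks = (yLen + threads - 1) / threads;
    convolveKernel<<<blocks, threads>>>(dx, xLen, dh, hLen, dy, yLen);
    cudaMemcpy(y, dy, yLen * sizeof(float), cudaMemcpyDeviceToHost);

    // Convolving a unit impulse should reproduce the impulse response.
    printf("y[0] = %f (expected %f)\n", y[0], h[0]);

    cudaFree(dx); cudaFree(dh); cudaFree(dy);
    delete[] x; delete[] h; delete[] y;
    return 0;
}
```

A conventional CPU implementation performs the same O(N·K) sum serially for every output sample, which is the software-based, time-domain baseline the abstract compares against.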