Real-time GPU-based convolution: a follow-up

  • Authors:
  • Brent Cowan; Bill Kapralos

  • Affiliation:
  • University of Ontario Institute of Technology, Oshawa, Ontario, Canada (both authors)

  • Venue:
  • Future Play '09: Proceedings of the 2009 Conference on Future Play @ GDC Canada
  • Year:
  • 2009


Abstract

The generation of spatial audio is computationally very demanding; as a result, accurate spatial audio is typically overlooked in games and virtual environment applications, reducing the user's sense of presence or immersion. Previous work examined the application of the graphics processing unit (GPU) to the generation of real-time spatial audio. In particular, a GPU-based convolution method was developed that allowed for real-time convolution between an arbitrarily sized auditory signal and a filter. Despite the large computational savings, that GPU-based method introduced noise/artifacts into the lower-order bytes of the resulting output signal, which may have had a number of perceptual consequences. This work builds upon the previous GPU-based convolution method and describes a method that, by employing a superior GPU, eliminates the noise/artifacts of the previous approach and provides further computational savings.
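The core operation the paper accelerates is discrete (linear) convolution of an audio signal with a filter such as an impulse response. A minimal CPU reference sketch in Python is shown below; this is illustrative only and is not the authors' GPU implementation:

```python
# Direct-form linear convolution of an audio signal with a filter
# (impulse response). The paper's method performs this same operation
# on the GPU for real-time use; this plain-Python version is only a
# readable reference for what the operation computes.

def convolve(signal, kernel):
    """Full linear convolution: output length is len(signal) + len(kernel) - 1."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

# Convolving with a unit impulse reproduces the input signal.
print(convolve([1.0, 2.0, 3.0], [1.0]))  # → [1.0, 2.0, 3.0]
```

Note that the direct form costs O(N·M) multiply-adds for a signal of length N and filter of length M, which is exactly why real-time spatial audio (long impulse responses at audio sample rates) benefits from the GPU's parallelism.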