Perceptual audio rendering of complex virtual environments

  • Authors: Nicolas Tsingos, Emmanuel Gallo, George Drettakis
  • Affiliation: REVES/INRIA Sophia-Antipolis (all authors)

  • Venue: ACM SIGGRAPH 2004 Papers
  • Year: 2004

Abstract

We propose a real-time 3D audio rendering pipeline for complex virtual scenes containing hundreds of moving sound sources. The approach, based on auditory culling and spatial level-of-detail, can handle more than ten times the number of sources commonly available on consumer 3D audio hardware, with minimal decrease in audio quality. The method performs well for both indoor and outdoor environments. It leverages the limited capabilities of audio hardware for many applications, including interactive architectural acoustics simulations and automatic 3D voice management for video games.

Our approach dynamically eliminates inaudible sources and groups the remaining audible sources into a budget number of clusters. Each cluster is represented by one impostor sound source, positioned according to perceptual criteria. Spatial audio processing is then performed only on the impostor sound sources rather than on every original source, greatly reducing the computational cost.

A pilot validation study shows that the degradation in audio quality, as well as the localization impairment, is limited and does not appear to vary significantly with the cluster budget. We conclude that our real-time perceptual audio rendering pipeline can generate spatialized audio for complex auditory environments without introducing disturbing changes in the resulting perceived soundfield.
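To make the cull-then-cluster pipeline concrete, here is a minimal C++ sketch of the idea described in the abstract. It is an illustrative assumption, not the authors' implementation: the fixed audibility threshold stands in for the paper's masking-based culling, the loudness-weighted centroid stands in for its perceptual impostor positioning, and all type and function names are hypothetical.

```cpp
// Hypothetical sketch of a cull-and-cluster audio pipeline in the spirit of
// the paper. The clustering heuristic (loudest-source seeds plus a
// loudness-weighted centroid) is an assumption, not the published method.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

struct Source {
    Vec3  pos;
    float loudness;   // perceptual loudness estimate (assumed precomputed)
};

struct Cluster {
    Vec3  impostorPos; // position of the single representative source
    float gain;        // combined energy routed to the impostor
};

static float dist2(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Step 1: auditory culling -- drop sources whose loudness falls below an
// audibility threshold. (The paper accounts for masking between sources;
// a fixed threshold is a simplifying assumption here.)
std::vector<Source> cullInaudible(const std::vector<Source>& in, float threshold) {
    std::vector<Source> out;
    for (const auto& s : in)
        if (s.loudness >= threshold) out.push_back(s);
    return out;
}

// Step 2: group audible sources into at most `budget` clusters and place one
// impostor per cluster at the loudness-weighted centroid (a plausible
// stand-in for the paper's perceptual positioning criteria).
std::vector<Cluster> clusterSources(std::vector<Source> srcs, size_t budget) {
    std::vector<Cluster> clusters;
    if (srcs.empty()) return clusters;

    // Seed the clusters with the loudest sources.
    std::sort(srcs.begin(), srcs.end(),
              [](const Source& a, const Source& b) { return a.loudness > b.loudness; });
    size_t k = std::min(budget, srcs.size());
    for (size_t i = 0; i < k; ++i)
        clusters.push_back({srcs[i].pos, 0.0f});

    // Assign each source to its nearest seed, accumulating weighted centroids.
    std::vector<Vec3>  sum(k, {0, 0, 0});
    std::vector<float> wsum(k, 0.0f);
    for (const auto& s : srcs) {
        size_t best = 0;
        for (size_t c = 1; c < k; ++c)
            if (dist2(s.pos, clusters[c].impostorPos) <
                dist2(s.pos, clusters[best].impostorPos)) best = c;
        sum[best].x += s.pos.x * s.loudness;
        sum[best].y += s.pos.y * s.loudness;
        sum[best].z += s.pos.z * s.loudness;
        wsum[best]  += s.loudness;
        clusters[best].gain += s.loudness;
    }
    for (size_t c = 0; c < k; ++c)
        if (wsum[c] > 0.0f)
            clusters[c].impostorPos = {sum[c].x / wsum[c], sum[c].y / wsum[c],
                                       sum[c].z / wsum[c]};
    return clusters;
}

int main() {
    std::vector<Source> sources = {
        {{0.0f, 0, 1}, 0.9f}, {{0.2f, 0, 1}, 0.7f}, {{5.0f, 0, 0}, 0.8f},
        {{5.1f, 0, 0.1f}, 0.4f}, {{2.0f, 3, 0}, 0.01f} // last one is inaudible
    };
    auto audible  = cullInaudible(sources, 0.05f);
    auto clusters = clusterSources(audible, /*budget=*/2);
    for (const auto& c : clusters)
        std::printf("impostor at (%.2f, %.2f, %.2f), gain %.2f\n",
                    c.impostorPos.x, c.impostorPos.y, c.impostorPos.z, c.gain);
    // Per-source spatialization (HRTF filtering, panning, reverb sends) would
    // now run once per impostor instead of once per original source, which is
    // where the claimed cost reduction comes from.
}
```

In this toy scene, five sources collapse to two impostors plus one culled source, so the spatialization stage does two units of work instead of five; with hundreds of sources and a small cluster budget the same mechanism yields the order-of-magnitude savings the abstract describes.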