Dynamic sound rendering based on ray-caching

  • Authors:
  • Ken Chan;Rynson W. H. Lau;Jianmin Zhao

  • Affiliations:
  • Department of Computer Science, City University of Hong Kong, Hong Kong and Department of Computer Science, University of Durham, United Kingdom;Department of Computer Science, University of Durham, United Kingdom and College of Math., Physics and Info. Engineering, Zhejiang Normal University, China;College of Math., Physics and Info. Engineering, Zhejiang Normal University, China

  • Venue:
  • PCM'07: Proceedings of the 8th Pacific Rim Conference on Advances in Multimedia Information Processing
  • Year:
  • 2007

Abstract

Dynamic sound rendering has attracted a lot of attention in recent years due to its applications in computer games and architectural simulation. Although physics-based methods can produce realistic output, they typically involve recursive tracing of sound rays, which may be too computationally expensive for interactive dynamic environments. In this paper, we propose a ray-caching method that exploits ray coherence to accelerate the ray-tracing process. The proposed method is tailored for interactive sound rendering and is based on two approximation techniques: spatial approximation and angular approximation. The ray cache supports intra-frame, inter-frame and inter-observer sharing of rays. We demonstrate the performance of the new method through a number of experiments.
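
To illustrate the general idea of reusing traced rays under spatial and angular approximation, the sketch below shows a minimal ray cache keyed by a quantized ray origin (spatial approximation) and a quantized direction bin (angular approximation). This is not the authors' implementation: the cell size, the number of angular bins, and the cached quantities (attenuation, delay) are illustrative assumptions, and the paper's actual cache structure and sharing policy may differ.

```cpp
// Minimal ray-cache sketch: rays whose origins fall in the same spatial cell
// and whose directions fall in the same angular bin share one cached result.
// All parameters and cached fields here are hypothetical, for illustration only.
#include <cmath>
#include <cstdio>
#include <unordered_map>

struct Vec3 { float x, y, z; };

struct RayKey {
    int cx, cy, cz;    // quantized origin cell (spatial approximation)
    int btheta, bphi;  // quantized direction bin (angular approximation)
    bool operator==(const RayKey& o) const {
        return cx == o.cx && cy == o.cy && cz == o.cz &&
               btheta == o.btheta && bphi == o.bphi;
    }
};

struct RayKeyHash {
    size_t operator()(const RayKey& k) const {
        size_t h = 1469598103934665603ull;              // FNV-1a style mix
        for (int v : {k.cx, k.cy, k.cz, k.btheta, k.bphi}) {
            h ^= static_cast<size_t>(v);
            h *= 1099511628211ull;
        }
        return h;
    }
};

struct CachedPath {
    float attenuation;  // hypothetical cached propagation result
    float delay;        // hypothetical propagation delay in seconds
};

class RayCache {
public:
    RayCache(float cellSize, int angularBins)
        : cellSize_(cellSize), angularBins_(angularBins) {}

    RayKey makeKey(const Vec3& o, const Vec3& d) const {
        const float kPi = 3.14159265358979f;
        RayKey k;
        k.cx = static_cast<int>(std::floor(o.x / cellSize_));
        k.cy = static_cast<int>(std::floor(o.y / cellSize_));
        k.cz = static_cast<int>(std::floor(o.z / cellSize_));
        float theta = std::acos(d.z);            // polar angle of unit direction
        float phi   = std::atan2(d.y, d.x);      // azimuth
        k.btheta = static_cast<int>(theta / kPi * angularBins_);
        k.bphi   = static_cast<int>((phi + kPi) / (2.0f * kPi) * angularBins_);
        return k;
    }

    // Returns a cached result if a "similar enough" ray was traced before,
    // whether earlier in this frame, in a previous frame, or for another observer.
    const CachedPath* lookup(const Vec3& o, const Vec3& d) const {
        auto it = cache_.find(makeKey(o, d));
        return it == cache_.end() ? nullptr : &it->second;
    }

    void store(const Vec3& o, const Vec3& d, const CachedPath& p) {
        cache_[makeKey(o, d)] = p;
    }

private:
    float cellSize_;
    int angularBins_;
    std::unordered_map<RayKey, CachedPath, RayKeyHash> cache_;
};

int main() {
    RayCache cache(0.5f, 64);
    Vec3 o{1.0f, 2.0f, 0.0f}, d{0.0f, 0.0f, 1.0f};
    cache.store(o, d, {0.8f, 0.012f});

    // A ray with a slightly perturbed origin lands in the same cell and bin,
    // so the expensive recursive trace can be skipped.
    Vec3 o2{1.1f, 2.1f, 0.0f};
    if (const CachedPath* hit = cache.lookup(o2, d))
        std::printf("cache hit: attenuation=%.2f delay=%.3fs\n",
                    hit->attenuation, hit->delay);
    return 0;
}
```

In this sketch, coarser cells and wider angular bins raise the cache hit rate at the cost of approximation error, which mirrors the trade-off any such coherence-based acceleration must balance for interactive use.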