Creating Interactive Virtual Auditory Environments

  • Authors:
  • Tapio Lokki, Lauri Savioja, Riitta Väänänen, Jyri Huopaniemi, Tapio Takala

  • Venue:
  • IEEE Computer Graphics and Applications
  • Year:
  • 2002


Abstract

Sound rendering is the auditory analogue of graphics rendering: it creates virtual auditory environments. In graphics, images are created by calculating the distribution of light within a modeled environment; illumination methods such as ray tracing and radiosity are based on the physics of light propagation and reflection. Similarly, sound rendering is based on the physical laws of sound propagation and reflection. In this article we aim to clarify real-time sound rendering techniques by comparing them to the rendering of visual images. We also describe how sound rendering can be performed based on knowledge of the sound source and listener locations, the radiation characteristics of the sound sources, the geometry of the 3D model, and material absorption data — in other words, the same data used for graphics rendering. In this context, the term auralization (making audible) corresponds to visualization. Applications of sound rendering range from film effects, computer games, and other multimedia content to enhancing the sense of presence in virtual reality.
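To illustrate the kind of geometric computation the abstract alludes to, the sketch below shows a minimal, hypothetical example of the image-source method — one classical technique for computing early reflections from source and listener positions, room geometry, and a material absorption coefficient. The function names, the single-wall room, and the numeric values are illustrative assumptions, not the authors' implementation.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 °C

def direct_path(source, listener):
    """Propagation delay (s) and simple 1/r distance attenuation
    for the direct sound path between two 3D points."""
    r = math.dist(source, listener)
    return r / SPEED_OF_SOUND, 1.0 / max(r, 1e-6)

def first_order_image_source(source, listener, wall_z, absorption):
    """First-order reflection off a horizontal surface at height wall_z,
    via the image-source method: mirror the source across the surface,
    treat the mirrored source like a direct path, and scale the gain by
    the reflection coefficient (1 - absorption)."""
    mirrored = (source[0], source[1], 2.0 * wall_z - source[2])
    delay, gain = direct_path(mirrored, listener)
    return delay, gain * (1.0 - absorption)

# Hypothetical scene: source and listener 10 m apart, ceiling at z = 3 m
src, lst = (0.0, 0.0, 1.5), (10.0, 0.0, 1.5)
d_delay, d_gain = direct_path(src, lst)
r_delay, r_gain = first_order_image_source(src, lst, 3.0, absorption=0.3)
```

In a full renderer, each such path would feed a delay line and filter bank; here the point is only that delay and attenuation fall out of the same geometric data a graphics renderer already has.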