Using wavelets to synthesize stochastic-based sounds for immersive virtual environments

  • Authors:
  • Nadine E. Miner; Thomas P. Caudell

  • Affiliations:
  • Sandia National Laboratories; University of New Mexico, Albuquerque, NM

  • Venue:
  • ACM Transactions on Applied Perception (TAP)
  • Year:
  • 2005

Abstract

Stochastic, or nonpitched, sounds fill our real-world environment. Humans hear stochastic sounds almost continuously, including wind, rain, motor sounds, and many types of impact sounds. Because of their prevalence in real-world environments, it is important to include these types of sounds in realistic virtual environment simulations. This paper describes a synthesis approach that uses wavelets to model stochastic-based sounds. Parameterization of the wavelet models yields a variety of related sounds from a small set of models. The result is a set of dynamic sound models that can change in response to changes in the virtual environment. This paper describes the sound synthesis process, several developed models, and the ongoing perceptual experiments for validating the veracity of the synthesized sounds. The developed models and results demonstrate proof of concept and illustrate the potential of this approach.
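
The sketch below illustrates the general idea behind wavelet-based stochastic sound synthesis as the abstract describes it: decompose a recorded sound into wavelet subbands, perturb the coefficients statistically, and reconstruct a perceptually related variant. It is a minimal illustration only, assuming the PyWavelets (pywt) and NumPy libraries; the wavelet family ('db4'), decomposition depth, and the simple additive jitter scheme are hypothetical choices, not the authors' actual model or parameterization.

    # Minimal sketch: wavelet analysis/resynthesis of a stochastic sound.
    # Assumes PyWavelets and NumPy; parameters are illustrative only.
    import numpy as np
    import pywt

    def resynthesize(sample: np.ndarray, wavelet: str = "db4", level: int = 6,
                     jitter: float = 0.2, rng=None) -> np.ndarray:
        """Analyze a recorded stochastic sound, perturb its wavelet
        coefficients, and reconstruct a related sound instance."""
        rng = rng or np.random.default_rng()
        # Multi-level discrete wavelet decomposition of the source signal.
        coeffs = pywt.wavedec(sample, wavelet, level=level)
        new_coeffs = []
        for c in coeffs:
            # Keep each subband's overall energy (std. dev.) but randomize the
            # fine detail, giving a "same texture, different instance" sound.
            noise = rng.standard_normal(c.shape) * np.std(c) * jitter
            new_coeffs.append(c + noise)
        # Inverse transform back to a time-domain signal.
        return pywt.waverec(new_coeffs, wavelet)

    if __name__ == "__main__":
        fs = 44100
        # Placeholder for one second of a real recording (e.g., wind or rain).
        recorded = np.random.default_rng(0).standard_normal(fs)
        variant = resynthesize(recorded, jitter=0.3)

Varying the jitter (or, more generally, the statistics imposed on each subband) is one way a small set of models could be parameterized to produce a family of related sounds that track changes in the virtual environment.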