Physically Based Sound Synthesis for Large-Scale Virtual Environments

  • Authors:
  • Nikunj Raghuvanshi; Ming C. Lin

  • Affiliations:
  • University of North Carolina at Chapel Hill; University of North Carolina at Chapel Hill

  • Venue:
  • IEEE Computer Graphics and Applications
  • Year:
  • 2007


Abstract

Recorded sound clips have two main drawbacks. First, the resulting sound is repetitive: real sounds depend on how objects collide and where the impact occurs, and prerecorded clips cannot capture such variation. Second, recording original clips for every sound event in a virtual environment is a labor-intensive and tedious process. Physically based sound synthesis, in contrast, automatically captures the subtle shifts in tone and timbre caused by changes in impact location, material properties, and object geometry. The authors describe several techniques for accelerating sound simulation, thereby enabling realistic, physically based sound synthesis for large-scale virtual environments.
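
To illustrate how impact location and material can shape a synthesized sound, the sketch below uses modal synthesis, a common basis for physically based impact sounds: an object is modeled as a bank of exponentially damped sinusoidal modes, and striking it at different points excites those modes with different gains. This is a minimal illustration only; the modal data, function names, and parameters below are hypothetical and are not taken from the paper.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def synthesize_impact(freqs_hz, dampings, gains, duration_s=1.0):
    """Sum of exponentially damped sinusoids:
    s(t) = sum_i g_i * exp(-d_i * t) * sin(2*pi*f_i * t)."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    signal = np.zeros_like(t)
    for f, d, g in zip(freqs_hz, dampings, gains):
        signal += g * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    # Normalize so the clip can be written to a fixed-point audio buffer without clipping.
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal

# Hypothetical modal data for a small plate. In a physically based system the
# frequencies and decay rates would come from a modal analysis of the object's
# geometry and material, and the per-mode gains from the mode shapes evaluated
# at the contact point.
freqs = [430.0, 1120.0, 1870.0, 2640.0]   # Hz; set by geometry and material stiffness
damps = [6.0, 11.0, 18.0, 26.0]           # 1/s; set by material damping
gains_center = [1.0, 0.2, 0.5, 0.1]       # excitation mix when struck near the center
gains_edge   = [0.3, 0.9, 0.4, 0.7]       # different mix when struck near an edge

center_hit = synthesize_impact(freqs, damps, gains_center)
edge_hit = synthesize_impact(freqs, damps, gains_edge)
```

Because the per-mode gains change with the contact point, and the frequencies and decay rates change with material and shape, every impact sounds slightly different; this is the variation the abstract contrasts with repetitive prerecorded clips. The acceleration techniques the paper describes target the cost of evaluating many such mode banks simultaneously in a large environment.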