Advances in modal analysis using a robust and multiscale method

  • Authors:
  • Cécile Picard;Christian Frisson;François Faure;George Drettakis;Paul G. Kry

  • Affiliations:
REVES, INRIA, Sophia Antipolis, France;TELE Lab, Université catholique de Louvain, Louvain-la-Neuve, Belgium;EVASION, INRIA Rhône-Alpes, LJK, Grenoble, France;REVES, INRIA, Sophia Antipolis, France;SOCS, McGill University, Montreal, Canada

  • Venue:
  • EURASIP Journal on Advances in Signal Processing - Special issue on digital audio effects
  • Year:
  • 2010


Abstract

This paper presents a new approach to modal synthesis for rendering the sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object, at different scales of resolution, and for a range of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements based on the distribution of material in each cell. The voxelization uses a sparse regular grid embedding of the object, which permits the construction of plausible lower-resolution approximations of the modal model. This allows us to compute the audible impulse response of a variety of objects. Our solution is robust and can handle nonmanifold geometries that include both volumetric and surface parts. We present a system that allows us to manipulate and tune sounding objects in a way appropriate for games, training simulations, and other interactive virtual environments.
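
As a rough illustration of two ingredients of the pipeline the abstract describes, the sketch below shows (a) binning material samples into a sparse regular grid, of the kind that could drive per-cell tuning of hexahedral element parameters, and (b) resynthesizing an audible impulse response as a sum of exponentially damped sinusoids from modal data. This is a minimal sketch in Python, not the authors' implementation; the function names, the occupancy heuristic, and the numeric mode values are illustrative assumptions.

```python
from collections import defaultdict
import numpy as np

def sparse_voxel_occupancy(points, cell_size):
    """Bin sample points (e.g. samples on the surface mesh, or interior
    samples for volumetric parts) into a sparse regular grid, returning the
    number of samples in each occupied cell.  Per-cell counts like these
    could be used to scale the stiffness and mass of each hexahedral element
    according to how much material the cell contains (hypothetical helper,
    not the paper's algorithm)."""
    occupancy = defaultdict(int)
    for p in np.asarray(points, dtype=float):
        key = tuple(np.floor(p / cell_size).astype(int))
        occupancy[key] += 1
    return dict(occupancy)

def modal_impulse_response(freqs_hz, dampings, gains, duration=1.0, sr=44100):
    """Standard modal-synthesis model of an object's impulse response:
    a sum of exponentially damped sinusoids.

    freqs_hz : modal frequencies in Hz (in the paper these would come from
               the eigen-decomposition of the assembled FEM system)
    dampings : per-mode decay rates in 1/s
    gains    : per-mode excitation gains (dependent on the contact point)
    """
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for f, d, a in zip(freqs_hz, dampings, gains):
        out += a * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    peak = np.max(np.abs(out))          # normalize to avoid clipping
    return out / peak if peak > 0 else out

# Example with three made-up modes of a small resonant object.
ir = modal_impulse_response(
    freqs_hz=[523.0, 1340.0, 2790.0],
    dampings=[6.0, 9.0, 14.0],
    gains=[1.0, 0.6, 0.3],
)
```

In this toy form, the occupancy dictionary stands in for the "distribution of material in each cell" mentioned in the abstract, and the resynthesis function stands in for the final audio-rendering stage; everything in between (element tuning, assembly, eigen-decomposition) is omitted.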