3D-audio matting, postediting, and rerendering from field recordings

  • Authors:
  • Emmanuel Gallo; Nicolas Tsingos; Guillaume Lemaitre

  • Affiliations:
  • Rendu & Environnements Virtuels Sonorisés, Institut National de Recherche en Informatique et en Automatique, Sophia-Antipolis Cedex, France and Centre Scientifique et Technique du Bâ ...; Rendu & Environnements Virtuels Sonorisés, Institut National de Recherche en Informatique et en Automatique, Sophia-Antipolis Cedex, France; Rendu & Environnements Virtuels Sonorisés, Institut National de Recherche en Informatique et en Automatique, Sophia-Antipolis Cedex, France

  • Venue:
  • EURASIP Journal on Applied Signal Processing
  • Year:
  • 2007


Abstract

We present a novel approach to real-time spatial rendering of realistic auditory environments and sound sources recorded live, in the field. Using a set of standard microphones distributed throughout a real-world environment, we record the sound field simultaneously from several locations. After spatial calibration, we segment from this set of recordings a number of auditory components, together with their locations. We compare existing time-delay-of-arrival (TDOA) estimation techniques between pairs of widely spaced microphones and introduce a novel, efficient hierarchical localization algorithm. Using the high-level representation thus obtained, we can edit and rerender the acquired auditory scene over a variety of listening setups. In particular, we can move or alter the different sound sources and arbitrarily choose the listening position. We can also composite elements of different scenes together in a spatially consistent way. Our approach provides efficient rendering of complex soundscapes which would be challenging to model using discrete point sources and traditional virtual acoustics techniques. We demonstrate a wide range of possible applications for games, virtual and augmented reality, and audiovisual postproduction.
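Among the classical TDOA estimators the abstract alludes to, generalized cross-correlation with phase transform (GCC-PHAT) is a standard baseline. The sketch below is purely illustrative — it is not the paper's hierarchical localization algorithm — and shows how an inter-microphone delay can be recovered from the peak of a phase-whitened cross-correlation; the function name and synthetic signals are this example's own.

```python
import numpy as np

def gcc_phat_tdoa(sig_a, sig_b, fs):
    """Estimate the time delay of sig_a relative to sig_b (in seconds)
    using generalized cross-correlation with phase transform (GCC-PHAT)."""
    n = len(sig_a) + len(sig_b)           # zero-pad to avoid circular wrap-around
    A = np.fft.rfft(sig_a, n=n)
    B = np.fft.rfft(sig_b, n=n)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12                # PHAT weighting: keep phase, drop magnitude
    cc = np.fft.irfft(R, n=n)
    # reorder so lags run from -(len_b - 1) to +(len_a - 1)
    cc = np.concatenate((cc[-(len(sig_b) - 1):], cc[:len(sig_a)]))
    lag = np.argmax(np.abs(cc)) - (len(sig_b) - 1)
    return lag / fs

# Synthetic check: b is a 40-sample delayed copy of white noise a.
fs = 16000
rng = np.random.default_rng(0)
a = rng.standard_normal(fs)
delay = 40
b = np.concatenate((np.zeros(delay), a[:-delay]))
print(gcc_phat_tdoa(b, a, fs))            # → 0.0025 (= 40 / 16000 s)
```

The PHAT weighting whitens the spectrum so the correlation peak stays sharp in reverberant conditions, which matters for the widely spaced microphone pairs the paper considers.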