On building immersive audio applications using robust adaptive beamforming and joint audio-video source localization

  • Authors:
  • J. A. Beracoechea, S. Torres-Guijarro, L. García, F. J. Casajús-Quirós

  • Affiliations:
  • Departamento de Señales, Sistemas y Radiocomunicaciones, Universidad Politécnica de Madrid, Madrid, Spain (all authors)

  • Venue:
  • EURASIP Journal on Applied Signal Processing
  • Year:
  • 2006

Abstract

This paper addresses some of the problems, strategies, and solutions involved in building true immersive audio systems oriented to future communication applications. The aim is to build a system in which the acoustic field of a chamber is recorded using a microphone array and then reconstructed, or rendered again, in a different chamber using loudspeaker-array-based techniques. Our proposal explores the possibility of using recent robust adaptive beamforming techniques to effectively estimate the original sources in the emitting room. A joint audio-video localization method, needed both in the estimation process and in the rendering engine, is also presented. The estimated source signal and the source localization information drive a wave field synthesis engine that renders the acoustic field again in the receiving chamber. The system performance is tested using MUSHRA-based subjective tests.
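
The abstract describes a pipeline in which a robust adaptive beamformer, steered toward a source position obtained from joint audio-video localization, extracts the source signal later re-rendered by wave field synthesis. As an illustration only, the following Python sketch shows one common robust beamforming variant, a diagonally loaded MVDR beamformer for a single frequency bin; the function names, array geometry, loading factor, and toy data are assumptions for the example and are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): a diagonally loaded MVDR
# beamformer, one standard form of robust adaptive beamforming, applied per
# frequency bin to a multichannel microphone-array spectrum.
import numpy as np

def steering_vector(freq_hz, mic_positions, source_position, c=343.0):
    """Free-field steering vector for one frequency bin.

    mic_positions: (M, 3) microphone coordinates in metres.
    source_position: (3,) estimated source position (e.g. supplied by the
    joint audio-video localization step).
    """
    dists = np.linalg.norm(mic_positions - source_position, axis=1)
    delays = dists / c
    return np.exp(-2j * np.pi * freq_hz * delays)

def mvdr_weights(R, d, loading=1e-2):
    """Diagonally loaded MVDR weights for one frequency bin.

    R: (M, M) spatial covariance matrix of the microphone signals.
    d: (M,) steering vector toward the estimated source.
    loading: diagonal loading factor giving robustness against steering /
    localization errors (illustrative value, not from the paper).
    """
    M = R.shape[0]
    R_loaded = R + loading * np.trace(R) / M * np.eye(M)
    Rinv_d = np.linalg.solve(R_loaded, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Example: 8-microphone linear array, single bin at 1 kHz, toy data.
rng = np.random.default_rng(0)
mics = np.c_[np.linspace(0.0, 0.35, 8), np.zeros(8), np.zeros(8)]
src = np.array([1.0, 2.0, 0.0])            # assumed localized source position
d = steering_vector(1000.0, mics, src)

R = np.outer(d, d.conj()) + 0.1 * np.eye(8)  # source plus diffuse noise
w = mvdr_weights(R, d)
x = d + 0.05 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
print("beamformer output:", w.conj() @ x)    # estimate of the source spectrum
```

In this kind of system the beamformer output for each bin would be collected across frequencies to reconstruct the source signal, which, together with the localization estimate, would then drive the wave field synthesis rendering stage.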