Presence: Teleoperators and Virtual Environments
Immersive audio systems can be used to render virtual sound sources in three-dimensional (3-D) space around a listener. This is achieved by simulating the head-related transfer function (HRTF) amplitude and phase characteristics using digital filters. In this paper, we examine key signal processing considerations in spatial sound rendering over headphones and loudspeakers. We address the problem of crosstalk inherent in loudspeaker rendering and examine two methods for implementing crosstalk cancellation and loudspeaker frequency response inversion in real time. We demonstrate that crosstalk cancellation of 30 dB is achievable with both methods, but one of the two (the Fast RLS Transversal Filter method) offers a significant advantage in computational efficiency. Our analysis extends readily to non-symmetric listening positions and moving listeners.
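To make the crosstalk problem concrete, the sketch below designs a frequency-domain crosstalk canceller for the symmetric two-loudspeaker case described in the abstract. The 2×2 acoustic transfer matrix couples each loudspeaker to both ears through an ipsilateral and a contralateral path; the canceller is a regularized inverse of that matrix. This is a minimal illustration under assumed toy impulse responses (`h_ipsi`, `h_contra` are hypothetical, not measured HRTFs), not the paper's Fast RLS Transversal Filter implementation, which achieves the same inversion adaptively at lower per-sample cost.

```python
import numpy as np

def crosstalk_canceller(h_ipsi, h_contra, n_fft=1024, beta=1e-5):
    """Design frequency-domain crosstalk-cancellation filters for a
    symmetric listener.  At each frequency bin the acoustic transfer
    matrix is H = [[Hi, Hc], [Hc, Hi]]; the canceller C is a
    Tikhonov-regularized inverse of H (beta avoids blow-up where the
    determinant Hi^2 - Hc^2 is small)."""
    Hi = np.fft.rfft(h_ipsi, n_fft)
    Hc = np.fft.rfft(h_contra, n_fft)
    det = Hi * Hi - Hc * Hc
    inv_det = np.conj(det) / (np.abs(det) ** 2 + beta)  # regularized 1/det
    # Inverse of a symmetric 2x2 matrix: adjugate scaled by 1/det.
    C_ii = Hi * inv_det    # diagonal (same-side) canceller filter
    C_ic = -Hc * inv_det   # off-diagonal (cross-side) canceller filter
    return C_ii, C_ic

def render(left, right, C_ii, C_ic, n_fft=1024):
    """Apply the canceller to one block of a binaural signal pair
    (single-block circular convolution, for illustration only)."""
    L = np.fft.rfft(left, n_fft)
    R = np.fft.rfft(right, n_fft)
    out_l = np.fft.irfft(C_ii * L + C_ic * R, n_fft)
    out_r = np.fft.irfft(C_ic * L + C_ii * R, n_fft)
    return out_l, out_r
```

Feeding the canceller's output back through the same toy transfer matrix shows the contralateral (crosstalk) component cancelled well beyond the 30 dB figure reported in the abstract; with real HRTFs, head movement, and causality constraints, achievable cancellation is far lower, which motivates the adaptive inverse-filtering methods the paper compares.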