3-D sound for virtual reality and multimedia
Virtual audio simulation uses head-related transfer function (HRTF) synthesis and headphone playback to create a sound field that resembles real-life environments. Localization performance is influenced by parameters such as the recording method and spatial resolution of the HRTFs, the equalization of the measurement chain, and common headphone-playback errors; the most important of these errors are in-the-head localization and front-back reversals. Among other cues, small movements of the head are considered important for avoiding these phenomena. This study uses the BEACHTRON sound card and its HRTFs to emulate small head movements by randomly moving the virtual sound source instead; the method requires no additional equipment, sensors, or feedback. Fifty untrained subjects participated in listening tests with different stimuli and presentation speeds. A virtual target source was rendered in front of the listener with random movements of 1°-7°. The experiments showed that this kind of simulation can help resolve in-the-head localization, but offers no clear benefit for resolving front-back errors. Emulating small head movements of 2° increased externalization rates in about 21% of the subjects, while presentation speed had no significant effect.
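The source-jitter idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, parameters, and update count are assumptions, and the actual study drove the BEACHTRON renderer rather than producing a list of angles. It only shows the core step of randomly offsetting the virtual source's azimuth within a fixed range (the 1°-7° range tested, with 2° reported as beneficial).

```python
import random

def jitter_azimuth(base_azimuth_deg, max_offset_deg=2.0, n_updates=100, seed=0):
    """Emulate small head movements by randomly offsetting the virtual
    source azimuth around a base direction (hypothetical helper, not
    from the paper).

    max_offset_deg corresponds to the 1-7 degree range tested in the
    study; 2 degrees is the value reported to improve externalization.
    Each returned angle would be fed to the HRTF renderer as the
    momentary source position.
    """
    rng = random.Random(seed)
    return [base_azimuth_deg + rng.uniform(-max_offset_deg, max_offset_deg)
            for _ in range(n_updates)]

# Example: a frontal target source (0 degrees azimuth) jittered within +/-2 degrees
angles = jitter_azimuth(0.0, max_offset_deg=2.0)
```

Because the offsets are drawn independently per update, no sensor or listener feedback is needed, which is the practical advantage the study emphasizes.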