Audio rendering is generally used to increase the realism of virtual environments (VEs), but it may also improve performance in specific tasks carried out in interactive applications such as games or simulators. In this article we investigate the effect of sound-rendering quality on performance in an inherently vision-dominated task: a virtual traffic gap-crossing scenario with two components, first discriminating crossable from uncrossable gaps in oncoming traffic, and second timing the start of the street crossing so as to avoid an accident. We carried out a study with 48 participants in an immersive virtual environment setup with a large screen and headphones. Participants were assigned to one of three conditions: the first group heard spatialized audio rendered with head-related transfer function (HRTF) filtering, the second heard conventional stereo rendering, and the third performed the task in a mute condition. Our results provide clear evidence that spatialized audio improves task performance compared to the unimodal mute condition. Since all task-relevant information was within the participants' field of view, we conclude that the performance gain results from a bimodal advantage due to the integration of visual and auditory spatial cues.
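The HRTF filtering mentioned above can be illustrated with a minimal sketch: a mono source signal is convolved with a pair of head-related impulse responses (HRIRs), one per ear, to produce a binaural stereo signal. The `spatialize` function and the toy HRIR pair below are illustrative assumptions, not the setup used in the study; real HRIRs are measured per source direction and interpolated as the listener's head moves.

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal as binaural stereo by convolving it with an
    HRIR pair (one impulse response per ear).

    A minimal sketch of HRTF-based spatialization, assuming precomputed
    HRIRs for a fixed source direction.
    """
    n = len(mono) + max(len(hrir_left), len(hrir_right)) - 1
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    # Zero-pad both channels to a common length before stacking.
    left = np.pad(left, (0, n - len(left)))
    right = np.pad(right, (0, n - len(right)))
    return np.stack([left, right], axis=-1)

# Hypothetical toy HRIR pair for a source to the listener's left:
# the right-ear response is delayed and attenuated, approximating the
# interaural time and level differences an HRTF encodes.
hrir_l = np.array([1.0, 0.3])
hrir_r = np.array([0.0, 0.0, 0.5, 0.15])

# A short 440 Hz test tone at an assumed 8 kHz sample rate.
mono = np.sin(2 * np.pi * 440 * np.arange(64) / 8000.0)
stereo = spatialize(mono, hrir_l, hrir_r)
```

For a left-positioned source as sketched here, the left channel carries more energy and leads the right channel in time, which is the directional cue that stereo panning alone does not fully reproduce.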