3-D sound for virtual reality and multimedia
ICVR'07 Proceedings of the 2nd international conference on Virtual reality
Recent laboratory studies have demonstrated that properly designed uses of three-dimensional (3D) sound can be an effective technique for managing attention, improving performance, balancing workload, and reducing effort in information-rich decision environments (Brock, Stroup, & Ballas, 2002). However, a number of practical difficulties remain to be addressed before these benefits can be realized in real-world settings, where listeners must regularly cope with extraneous noise and listen for sounds in the external environment that may be relevant to their tasks. In many routine operational environments, such concerns make fixed, event-driven auditory display designs readily susceptible to auditory masking and presentation conflicts. Strategies for overcoming these vulnerabilities are needed to avoid unacceptable levels of performance degradation and information loss. Advances in sound processing technology and artificial intelligence techniques make it possible to imagine auditory displays that are capable of self-organizing their presentation behaviors, but a body of perceptual studies must first be carried out to support the manipulations such a system would implement. In this short paper, the notion of a self-organizing auditory display is characterized and relevant auditory perception research at the Naval Research Laboratory (NRL) is described.
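To make the idea of self-organizing presentation behavior concrete, the following is a minimal, hypothetical sketch (not the NRL system described above) of one such manipulation: a scheduler that serializes auditory cues which would otherwise overlap in time, deferring lower-priority cues so that no two play simultaneously and mask each other. The `Cue` class, its fields, and the `min_gap` parameter are all illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class Cue:
    """A single auditory event requested by the display (illustrative)."""
    name: str
    onset: float     # requested presentation time, in seconds
    priority: int    # higher value = more urgent
    duration: float  # estimated playback length, in seconds


def schedule(cues, min_gap=0.25):
    """Serialize overlapping cues to avoid simultaneous masking.

    Cues are considered in order of requested onset, with higher-priority
    cues taking precedence on ties. A cue that would overlap an
    already-placed cue is deferred until min_gap seconds after it ends.
    Returns (name, actual_start_time) pairs.
    """
    placed = []
    last_end = float("-inf")
    for cue in sorted(cues, key=lambda c: (c.onset, -c.priority)):
        start = max(cue.onset, last_end + min_gap)
        placed.append((cue.name, start))
        last_end = start + cue.duration
    return placed
```

For example, a status cue requested while an alert is still sounding would be deferred until shortly after the alert finishes, rather than being masked by it. A fuller self-organizing display would also consider spectral overlap and spatial separation as alternatives to deferral, which is exactly the kind of manipulation the perceptual studies discussed above would need to validate.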