Interfaces for advanced computing. Scientific American.
Grasping reality through illusion—interactive graphics serving science. CHI '88 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Windows on the world: 2D windows for 3D augmented reality. UIST '93 Proceedings of the 6th Annual ACM Symposium on User Interface Software and Technology.
Location Models for Augmented Environments. Personal and Ubiquitous Computing.
Managing Networks Through a Virtual World. IEEE Parallel & Distributed Technology: Systems & Applications.
The Animation of Autonomous Actors Based on Production Rules. CA '96 Proceedings of the Computer Animation.
As with most research in information displays, virtual displays have generally emphasized visual information. Many investigators, however, have pointed out the importance of the auditory system as an information channel. We believe that a three-dimensional auditory display can substantially enhance situational awareness by combining spatial and semantic information to form dynamic, multidimensional patterns of acoustic events that convey meaning about objects in the spatial world of the user. Such a display can be realized with an array of real sound sources or loudspeakers (Doll et al., 1986). The signal-processing device being developed at NASA-Ames maximizes flexibility and portability by synthetically generating three-dimensional sound in real time for delivery through headphones. Unlike conventional stereo, sources can be perceived outside the head at discrete distances and directions from the listener. The 3-D auditory display is currently being integrated with Ames' Virtual Interactive Environment Workstation (VIEW), which allows the user to explore and interact with a 360-degree synthesized or remotely sensed world using a head-mounted, wide-angle, stereoscopic display controlled by operator position, voice, and gesture.

Applications of a three-dimensional auditory display include any context in which the user's spatial awareness is important, particularly when visual cues are limited or absent. Examples include advanced teleconferencing environments, monitoring telerobotic activities in hazardous situations, and scientific "visualization" of multidimensional data (e.g., Doll et al., 1986; Foley, 1987; Fisher et al., 1988; Brooks, 1988).
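The abstract does not specify the NASA-Ames signal-processing method, only that sources are synthesized so they are heard at directions and distances outside the head. As a rough illustration of how a headphone signal can be given a directional impression at all, the sketch below pans a mono signal using interaural time and level differences. This is a deliberate simplification for illustration, not the device's actual technique; all function names and constants here are our own assumptions.

```python
import numpy as np

SAMPLE_RATE = 44100      # Hz; assumed output rate
HEAD_RADIUS = 0.0875     # m; a typical average head radius
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def spatialize(mono, azimuth_deg, sample_rate=SAMPLE_RATE):
    """Pan a mono signal to stereo using interaural time and level
    differences. azimuth_deg: 0 = straight ahead, +90 = hard right."""
    az = np.radians(azimuth_deg)
    # Woodworth's spherical-head approximation of the interaural
    # time difference for a distant source at azimuth `az`.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(az) + np.sin(abs(az)))
    delay = int(round(itd * sample_rate))          # delay in samples
    # Crude level difference: attenuate the ear facing away.
    near_gain = 1.0
    far_gain = 0.5 + 0.5 * np.cos(az)
    delayed = np.concatenate([np.zeros(delay), mono])  # far-ear signal
    padded = np.concatenate([mono, np.zeros(delay)])   # near-ear signal
    if azimuth_deg >= 0:   # source on the right: left ear is the far ear
        left, right = far_gain * delayed, near_gain * padded
    else:                  # source on the left: right ear is the far ear
        left, right = near_gain * padded, far_gain * delayed
    return np.stack([left, right], axis=1)
```

A time/level panner like this lateralizes sound inside the head; the externalized, out-of-head imagery the abstract claims additionally requires direction-dependent spectral filtering of the kind measured head-related transfer functions provide.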