We present a novel approach for mixing real and computer-generated audio in augmented reality (AR) applications. Analogous to the optical see-through and video see-through techniques of the visual domain, we introduce Hear-Through and Mic-Through audio AR. Hear-Through AR delivers computer-generated audio through a bone-conduction headset, leaving the ear canals free to receive sound from the surrounding environment. Mic-Through AR captures environmental audio with ear-worn microphones, mixes it with computer-generated audio in the computer, and delivers the result to the user over headphones. We present preliminary results from an empirical user study comparing a bone-conduction device, headphones, and a speaker array. For stationary sounds, subjects localized most accurately with an array of speakers physically placed around the listener. For moving sounds, the speaker array and the bone-conduction device performed equally well, and both outperformed standard headphones.
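The core of the Mic-Through approach is a per-sample mix of the real (microphone-captured) and virtual (computer-generated) audio streams before playback over headphones. The sketch below is a minimal illustration of that mixing step, not the authors' implementation; the function name, gain parameters, and the assumption of floating-point samples normalized to [-1, 1] are all hypothetical choices made here for clarity.

```python
def mix_mic_through(real, virtual, real_gain=1.0, virtual_gain=1.0):
    """Mix real and virtual audio sample streams for Mic-Through AR.

    `real` and `virtual` are equal-length sequences of float samples
    in [-1.0, 1.0] (a hypothetical normalized format). Each output
    sample is the weighted sum of the two inputs, hard-clipped to
    avoid overflow on playback.
    """
    return [
        max(-1.0, min(1.0, real_gain * r + virtual_gain * v))
        for r, v in zip(real, virtual)
    ]


# Example: a loud real sample plus a virtual cue clips at full scale.
mixed = mix_mic_through([0.5, -0.9], [0.6, -0.5])
```

A real system would run this mix per audio buffer at low latency, and might attenuate `real_gain` to foreground virtual cues; hard clipping here stands in for the limiter a production audio pipeline would use.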