DyPERS ('Dynamic Personal Enhanced Reality System') uses augmented reality and computer vision to autonomously retrieve 'media memories' triggered by real objects the user encounters. These memories are evoked as audio and video clips overlaid on the associated objects. The system runs an adaptive audio-visual learning system on a tetherless wearable computer. On request, the system records the user's visual and auditory scene in real time, and the user then associates the recording with a snapshot of a visual object. The object acts as a key: when the real-time vision system detects it in the scene again, DyPERS plays back the corresponding audio-visual sequence. The vision system is a probabilistic algorithm capable of discriminating between hundreds of everyday objects under varying viewing conditions (viewpoint changes, lighting, etc.). Once an audio-visual clip is stored, the vision system automatically recalls and plays it back whenever it detects the object the user chose as a reminder of that sequence. The DyPERS interface augments users without encumbering them and effectively mimics a form of audio-visual memory. First results on performance and usability are presented.
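The associate-then-recall loop described above can be sketched in a few lines. This is a minimal illustration, not the actual DyPERS implementation: the names (`MemoryStore`, `rf_histogram`, `match_probability`) are hypothetical, and the probabilistic matcher is reduced here to a simple normalized multidimensional histogram compared by histogram intersection, standing in for the paper's far more capable recognition system.

```python
import numpy as np

def rf_histogram(features, bins=8):
    # Multidimensional histogram over per-pixel feature vectors in [0, 1]
    # (e.g. local receptive-field responses), normalized so it can be read
    # as an empirical distribution p(feature | object).
    hist, _ = np.histogramdd(features, bins=bins,
                             range=[(0.0, 1.0)] * features.shape[1])
    return hist / hist.sum()

def match_probability(query, key):
    # Histogram intersection: sum of element-wise minima.
    # Equals 1.0 for identical distributions, near 0.0 for disjoint ones.
    return float(np.minimum(query, key).sum())

class MemoryStore:
    """Associates an object 'key' (its histogram) with a media clip; on a
    new frame, recalls the clip whose key best matches, above a threshold."""

    def __init__(self, threshold=0.6):
        self.keys = []  # list of (histogram, clip) pairs
        self.threshold = threshold

    def associate(self, features, clip):
        # User input: snapshot features of the key object + recorded clip.
        self.keys.append((rf_histogram(features), clip))

    def recall(self, features):
        # Vision loop: match the current frame against all stored keys.
        if not self.keys:
            return None
        query = rf_histogram(features)
        hist, clip = max(self.keys,
                         key=lambda kc: match_probability(query, kc[0]))
        if match_probability(query, hist) >= self.threshold:
            return clip
        return None
```

For example, after `store.associate(object_features, "museum_tour_clip")`, a later frame containing the same object makes `store.recall(frame_features)` return `"museum_tour_clip"`, while unfamiliar scenes fall below the threshold and return `None`. Thresholding the match score is one simple way to trade off false recalls against missed ones.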