Wearable computers and their novel applications demand more context-specific user interfaces than traditional desktop paradigms can offer. This article describes a multimodal interface and explains how it enhances a mobile user's situational awareness and provides new functionality. The mobile augmented-reality system visualizes information encountered in urban environments that would otherwise remain invisible. A versatile filtering tool allows interactive display of occluded infrastructure and of dense data distributions, such as room temperature or wireless network strength, with applications in building maintenance, emergency response, and reconnaissance missions. To control this complex application functionality in the real world, the authors combine multiple input modalities (vision-based hand gesture recognition, a 1D tool, and speech recognition) with three late-integration styles to provide intuitive and effective input. The system is demonstrated in realistic indoor and outdoor task environments, and preliminary user experiences are reported. The authors postulate that novel interaction metaphors must be developed together with user interfaces capable of controlling them.
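To make the late-integration idea concrete: in late fusion, each recognizer first produces its own scored interpretation, and fusion happens at the hypothesis level rather than on raw sensor data. The Python sketch below illustrates one simple variant under stated assumptions; the Hypothesis type, the additive scoring, the one-second grouping window, and command names such as "show_occluded" are illustrative inventions, not the authors' actual integration styles or code.

# A minimal sketch of hypothesis-level late fusion across input modalities.
# All names and scoring details are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    modality: str      # e.g., "gesture", "speech", or "1d_tool"
    command: str       # the recognizer's interpreted command
    confidence: float  # recognizer confidence in [0, 1]
    timestamp: float   # seconds; used to group near-simultaneous inputs

def fuse(hypotheses, window=1.0):
    # Keep only hypotheses close in time to the most recent one, then let
    # agreeing modalities reinforce each other (mutual disambiguation)
    # by summing their confidences per candidate command.
    if not hypotheses:
        return None
    t0 = max(h.timestamp for h in hypotheses)
    recent = [h for h in hypotheses if t0 - h.timestamp <= window]
    scores = {}
    for h in recent:
        scores[h.command] = scores.get(h.command, 0.0) + h.confidence
    return max(scores, key=scores.get)

# Speech and gesture agree on one command, outweighing a competing
# gesture-only hypothesis that has higher individual confidence.
inputs = [
    Hypothesis("speech", "show_occluded", 0.7, 10.2),
    Hypothesis("gesture", "show_occluded", 0.5, 10.4),
    Hypothesis("gesture", "select", 0.6, 10.4),
]
print(fuse(inputs))  # -> show_occluded

One general design note: fusing at the hypothesis level keeps each recognizer independent, so a modality can be swapped or dropped without retraining the others; the trade-off is that cross-modal cues present only in the raw signals are lost.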