Visual Support System for Selecting Reactive Elements in Intelligent Environments
Proceedings of the 2012 International Conference on Cyberworlds (CW '12)
In recent years the number of intelligent systems has grown rapidly, and in some use cases classical interaction devices such as mouse and keyboard are being replaced. Novel, goal-based interaction systems, e.g. those based on gesture and speech, allow natural control of various devices; however, they are prone to misinterpreting the user's intention. In this work we present a method for supporting goal-based interaction using multimodal interaction systems. By combining speech and gesture we can compensate for the uncertainties of both interaction methods, thus improving intention recognition. Using a prototypical system, we have demonstrated the usability of this approach in a qualitative evaluation.
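The core idea of compensating one modality's uncertainty with the other can be sketched as a simple late fusion of recognizer confidence scores. This is only an illustrative sketch: the paper does not specify its fusion method, and the function name, intent labels, and weights below are all hypothetical.

```python
# Hypothetical late-fusion sketch: combine per-intent confidence scores from
# independent speech and gesture recognizers, then pick the joint best intent.
# Weights and intent names are illustrative, not taken from the paper.

def fuse_intents(speech_scores, gesture_scores, w_speech=0.5, w_gesture=0.5):
    """Return (best_intent, combined_scores) under a weighted sum of
    the two recognizers' confidences; missing intents count as 0.0."""
    intents = set(speech_scores) | set(gesture_scores)
    combined = {
        intent: w_speech * speech_scores.get(intent, 0.0)
                + w_gesture * gesture_scores.get(intent, 0.0)
        for intent in intents
    }
    return max(combined, key=combined.get), combined

# Example: speech alone is ambiguous between two devices ("turn on the ..."),
# but a pointing gesture toward the lamp disambiguates the intent.
speech = {"turn_on_lamp": 0.45, "turn_on_tv": 0.40}
gesture = {"turn_on_lamp": 0.80}
best, scores = fuse_intents(speech, gesture)
print(best)  # turn_on_lamp
```

A weighted sum is the simplest fusion choice; probabilistic schemes (e.g. multiplying likelihoods or a Bayesian combination) are common alternatives when the recognizers output calibrated probabilities.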