Functional gestures for human-environment interaction
HCI'13 Proceedings of the 15th international conference on Human-Computer Interaction: interaction modalities and techniques - Volume Part IV
In this paper, we describe a multimodal approach to human-smart environment interaction. Input is based on three modalities: deictic gestures, symbolic gestures, and isolated spoken words. Deictic gestures are interpreted using the PTAMM (Parallel Tracking and Multiple Mapping) method, with a camera held in the hand or worn on the user's arm. The PTAMM algorithm tracks the position and orientation of the hand in the environment in real time. This information is used to point, along the camera's optical axis, at real or virtual objects previously added to the environment. Symbolic hand gestures and isolated voice commands are recognized and used to interact with the pointed target. Haptic and acoustic feedback is provided to the user to improve the quality of the interaction. A complete prototype has been realized, and a first usability evaluation, conducted with 10 users, showed positive results.
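The abstract does not specify how the pointed target is selected from the tracked camera pose. A minimal sketch of one plausible rule, under assumed conventions (the optical axis is +z in the camera frame, object positions are registered in the world frame, and the function name, parameters, and angular threshold are all hypothetical): cast a ray along the optical axis and pick the registered object with the smallest angular deviation from it.

```python
import numpy as np

def pointed_object(cam_pos, cam_rot, objects, max_angle_deg=5.0):
    """Select the registered object closest to the camera's optical axis.

    cam_pos: (3,) camera position in the world frame (from the tracker)
    cam_rot: (3, 3) camera-to-world rotation matrix (from the tracker)
    objects: dict mapping object name -> (3,) world-frame position
    Returns the best-matching name, or None if no object lies within
    the angular threshold of the optical axis.
    """
    # Optical axis expressed in the world frame (assumes +z = forward).
    axis = cam_rot @ np.array([0.0, 0.0, 1.0])
    best, best_angle = None, np.deg2rad(max_angle_deg)
    for name, pos in objects.items():
        v = pos - cam_pos
        n = np.linalg.norm(v)
        if n == 0.0:
            continue  # object coincides with the camera; direction undefined
        # Angle between the optical axis and the camera-to-object direction.
        angle = np.arccos(np.clip(axis @ (v / n), -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = name, angle
    return best
```

For a camera at the origin looking along +z, an object at (0, 0, 2) lies exactly on the axis and is selected, while an object 45 degrees off axis falls outside the 5-degree threshold and is ignored.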