In this paper we describe ARAMIS, a novel hybrid approach that aims to enhance human interaction with smart environments. We call the approach hybrid because it combines three dichotomies: the wearable and pervasive computing paradigms, the virtual and real worlds, and optical and non-optical sensing technologies. To validate the proposed approach, we designed a multimodal framework in which gestures serve as the main interaction modality. The framework is designed, first, to efficiently manage and merge information from heterogeneous, distributed sensors and, second, to offer a simple tool for connecting such devices together. Finally, we developed a prototype to test and evaluate the proposed approach.
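The core framework requirement stated above (managing and merging events from heterogeneous, distributed sensors into a single interaction stream) can be illustrated with a minimal sketch. This is not ARAMIS's actual API; the `FusionHub` and `SensorEvent` names, the sensor names, and the timestamp-ordered merge policy are all illustrative assumptions.

```python
# Minimal sketch of a fusion hub that merges timestamped events from
# heterogeneous sensors (optical and non-optical, wearable and pervasive)
# into one stream ordered by timestamp. Names are hypothetical, not ARAMIS's API.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class SensorEvent:
    timestamp: float                     # only the timestamp drives ordering
    source: str = field(compare=False)   # e.g. "room_camera", "wrist_imu"
    modality: str = field(compare=False) # e.g. "optical", "inertial"
    payload: dict = field(compare=False)


class FusionHub:
    """Collects events from distributed sensors and replays them in time order."""

    def __init__(self):
        self._events = []  # min-heap keyed on timestamp

    def publish(self, event: SensorEvent) -> None:
        heapq.heappush(self._events, event)

    def merged_stream(self):
        # Drain events in timestamp order, regardless of which sensor sent them.
        while self._events:
            yield heapq.heappop(self._events)


# Three heterogeneous sensors report gesture observations out of order.
hub = FusionHub()
hub.publish(SensorEvent(0.20, "wrist_imu", "inertial", {"gesture": "swipe"}))
hub.publish(SensorEvent(0.10, "room_camera", "optical", {"gesture": "point"}))
hub.publish(SensorEvent(0.30, "depth_sensor", "optical", {"gesture": "grab"}))

ordered = [e.source for e in hub.merged_stream()]
print(ordered)  # ['room_camera', 'wrist_imu', 'depth_sensor']
```

A real fusion engine would additionally need clock synchronization across devices and a downstream classifier combining the aligned observations; the sketch only shows the merge step that a "simple tool to connect such devices" would build on.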