People communicate with each other in different ways, such as words and gestures, to convey information about their status, emotions, and intentions. But how can this information be described so that autonomous systems (e.g., robots) can interact with a human being in a given environment? A multimodal interface allows more flexible and natural interaction between a user and a computing system. This paper presents a methodological approach for designing an architecture that facilitates the work of a fusion engine. The selection of modalities and the fusion of events performed by the fusion engine are based on the definition of an ontology that describes the environment in which the multimodal interaction system operates.
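To make the division of labor concrete, the sketch below illustrates the ontology-driven fusion pattern the abstract describes: an ontology supplies facts about the environment, and a fusion engine consults it to select trustworthy modalities and combine temporally close events into a single command. This is a minimal, hypothetical sketch; the class names (EnvironmentOntology, FusionEngine, ModalityEvent), the noisy-environment rule, and the time-window fusion heuristic are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field
import time

@dataclass
class ModalityEvent:
    modality: str        # e.g. "speech" or "gesture"
    payload: str         # recognized token, e.g. "move" or "point:table"
    timestamp: float = field(default_factory=time.time)

class EnvironmentOntology:
    """Toy stand-in for an ontology describing the interaction environment.

    Maps environment facts (here, ambient noise) to the set of modalities
    the fusion engine should trust in that context.
    """
    def __init__(self, facts: dict):
        self.facts = facts

    def usable_modalities(self) -> set:
        # Hypothetical rule: in a noisy room, distrust speech recognition.
        if self.facts.get("ambient_noise") == "high":
            return {"gesture"}
        return {"speech", "gesture"}

class FusionEngine:
    def __init__(self, ontology: EnvironmentOntology, window_s: float = 1.5):
        self.ontology = ontology
        self.window_s = window_s   # events closer in time than this are fused
        self.buffer: list[ModalityEvent] = []

    def push(self, event: ModalityEvent):
        # Modality selection: drop events the ontology marks as unreliable.
        if event.modality not in self.ontology.usable_modalities():
            return None
        self.buffer.append(event)
        return self._try_fuse()

    def _try_fuse(self):
        # Keep only events inside the time window of the newest one,
        # then fuse when at least two distinct modalities are present.
        latest = self.buffer[-1].timestamp
        self.buffer = [e for e in self.buffer
                       if latest - e.timestamp <= self.window_s]
        if len({e.modality for e in self.buffer}) >= 2:
            ordered = sorted(self.buffer, key=lambda e: e.timestamp)
            command = " + ".join(e.payload for e in ordered)
            self.buffer.clear()
            return command
        return None

# Usage: a "put-that-there"-style interaction in a quiet room.
engine = FusionEngine(EnvironmentOntology({"ambient_noise": "low"}))
engine.push(ModalityEvent("speech", "move"))
print(engine.push(ModalityEvent("gesture", "point:table")))  # -> "move + point:table"
```

In a full system the ontology would be a richer, queryable knowledge base rather than a rule in code, but the architecture is the same: the ontology supplies environmental context, and the fusion engine applies that context when selecting and combining modality events.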