The increasing need to access information everywhere and at any time leads us to believe that future user interfaces, through which users interact with pervasive computing systems, must address both device and modality independence. The pervasive computing paradigm treats almost every object in the everyday environment as a system able to communicate with users and with other systems in its own language; the interaction between users and systems is therefore typically multimodal. The main challenge of multimodal interaction, which is also the main topic of this paper, lies in developing a framework able to process information derived from any input modality, to give each input an appropriate representation, and to integrate these individual representations into a joint semantic interpretation. This paper describes such a multimodal pervasive framework, presents its application in Ambient Assisted Living, and reports the usability test carried out to validate its effectiveness.
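The integration step described above can be illustrated with a minimal sketch, assuming (hypothetically) that each modality recognizer emits a partial semantic frame as a set of attribute-value pairs, and that the framework combines frames by unification; the function and attribute names here are illustrative, not the paper's actual API.

```python
# Hypothetical sketch: each input modality produces a partial semantic
# frame (attribute-value pairs), and the integrator unifies the frames
# into a single joint semantic interpretation.

def unify(a, b):
    """Unify two partial frames; return None if their values conflict."""
    merged = dict(a)
    for key, value in b.items():
        if key in merged and merged[key] != value:
            return None  # conflicting values cannot be unified
        merged[key] = value
    return merged

def integrate(frames):
    """Fold per-modality frames into one joint interpretation."""
    joint = {}
    for frame in frames:
        joint = unify(joint, frame)
        if joint is None:
            return None  # the inputs do not form a coherent command
    return joint

# Example: a spoken command combined with a pointing gesture.
speech = {"action": "turn_on"}            # from the speech recognizer
gesture = {"target": "lamp_livingroom"}   # from the gesture recognizer
joint = integrate([speech, gesture])
```

Unification-style fusion of this kind underlies several of the approaches cited in the multimodal-parsing literature; a real framework would additionally handle timing, confidence scores, and ambiguity among candidate interpretations.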