Mobile multimodal applications on mass-market devices: experiences
DEXA '07 Proceedings of the 18th International Conference on Database and Expert Systems Applications
Despite optimistic expectations, the spread of multimodal mobile applications is proceeding slowly. Nevertheless, the power of new high-end devices offers the opportunity to create a new class of applications with advanced synergic multimodal features. In this paper we present the results the CHAT group achieved in defining and building a platform for developing synergic mobile multimodal services. CHAT is a project co-funded by the Italian Ministry of Research, aimed at providing multimodal, context-sensitive services to mobile users. Our architecture is based on the following key concepts: a thin-client approach, a modular client interface, asynchronous content push, distributed recognition, natural language processing, and speech-driven semantic fusion. The core of the system is based on a mix of web and telecommunication technologies. This choice proved very useful for creating highly personalized, context-sensitive services. One of the main features is the ability to push appropriate content to the user's terminal, reducing unfriendly user interactions.