Multimodal interfaces combining natural modalities such as speech and touch with dynamic graphical user interfaces can make it easier and more effective for users to interact with applications and services on mobile devices. However, building these interfaces remains a complex and highly specialized task. The W3C EMMA standard provides a representation language for inputs to multimodal systems, facilitating plug-and-play of system components and rapid prototyping of interactive multimodal systems. We illustrate the capabilities of the EMMA standard through examination of its use in a series of mobile multimodal applications for the iPhone.
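As a rough illustration (not drawn from the paper itself), a minimal EMMA 1.0 document for a spoken query might look like the sketch below, parsed here with Python's standard library. The `emma:emma`, `emma:interpretation` elements and the `emma:confidence`, `emma:tokens`, `emma:medium`, and `emma:mode` attributes come from the W3C EMMA 1.0 specification; the application payload (`<query>`) and its attributes are hypothetical.

```python
import xml.etree.ElementTree as ET

EMMA_NS = "http://www.w3.org/2003/04/emma"  # W3C EMMA 1.0 namespace

# Hypothetical EMMA document: one speech interpretation carrying a
# recognizer confidence score, the recognized token string, and an
# application-specific payload (<query> is an invented element).
emma_doc = """
<emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
  <emma:interpretation id="int1"
                       emma:medium="acoustic" emma:mode="voice"
                       emma:confidence="0.82"
                       emma:tokens="italian restaurants near here">
    <query type="restaurant" cuisine="italian" location="near_here"/>
  </emma:interpretation>
</emma:emma>
"""

def best_interpretation(xml_text):
    """Return (confidence, tokens, payload tag) of the first interpretation."""
    root = ET.fromstring(xml_text)
    interp = root.find(f"{{{EMMA_NS}}}interpretation")
    conf = float(interp.get(f"{{{EMMA_NS}}}confidence"))
    tokens = interp.get(f"{{{EMMA_NS}}}tokens")
    payload = list(interp)[0].tag  # application payload element
    return conf, tokens, payload

conf, tokens, payload = best_interpretation(emma_doc)
print(conf, tokens, payload)  # 0.82 italian restaurants near here query
```

Because every input, whatever its modality, arrives wrapped in the same annotation vocabulary, a dialogue manager can consume speech, touch, or composed inputs through one parsing path, which is what enables the plug-and-play of components the abstract describes.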