Intelligent multimodal presentation of information aims to use several communication modalities to produce the most relevant outputs for the user. This problem involves several concepts related to information structure, interaction components (modes, modalities, devices), and context. In this paper we study the influence of the interaction context on system outputs. More precisely, we propose a conceptual model for the intelligent multimodal presentation of information. This model, called WWHT, is based on four concepts: "What", "Which", "How" and "Then". Together, these concepts describe the life cycle of a multimodal presentation from its "birth" to its "death", including its evolution along the way. On the basis of this model, we present the ELOQUENCE software platform for the specification, simulation, and execution of multimodal output systems. Finally, we describe two applications of this framework: the first simulates an incoming call on an intelligent mobile phone; the second concerns the task of marking out a target on the ground from a fighter-plane cockpit.
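The four-stage life cycle described above can be sketched as a simple pipeline. This is a minimal illustration only: all names (`Presentation`, `Context`, the stage functions, and the modality labels) are hypothetical and do not reflect the actual ELOQUENCE API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the WWHT life cycle (not the ELOQUENCE API).

@dataclass
class Context:
    noisy: bool = False            # interaction context attributes (assumed)
    screen_available: bool = True

@dataclass
class Presentation:
    info: str                                        # "What": content to present
    modalities: list = field(default_factory=list)   # "Which": chosen modalities
    rendering: str = ""                              # "How": concrete instantiation

def what(info: str) -> Presentation:
    # "What": identify the information to be presented
    return Presentation(info=info)

def which(p: Presentation, ctx: Context) -> Presentation:
    # "Which": select modalities suited to the current interaction context
    if ctx.screen_available:
        p.modalities.append("visual/text")
    if not ctx.noisy:
        p.modalities.append("audio/speech")
    return p

def how(p: Presentation) -> Presentation:
    # "How": instantiate the presentation on each selected modality
    p.rendering = " + ".join(f"{m}({p.info})" for m in p.modalities)
    return p

def then(p: Presentation, ctx: Context) -> Presentation:
    # "Then": evolve the running presentation when the context changes
    if ctx.noisy and "audio/speech" in p.modalities:
        p.modalities.remove("audio/speech")
        return how(p)
    return p

# Example inspired by the incoming-call scenario: the phone starts in a
# quiet context, then the environment becomes noisy and the presentation
# drops its speech component.
ctx = Context()
p = how(which(what("incoming call"), ctx))
ctx.noisy = True
p = then(p, ctx)
```

The point of the sketch is the separation of concerns: content selection, modality allocation, and rendering are distinct decisions, and "Then" re-enters the pipeline at the "How" stage when the context invalidates an earlier choice.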