Supervision of 3D multimodal rendering for protein-protein virtual docking
EGVE'08 Proceedings of the 14th Eurographics conference on Virtual Environments
This article addresses the integration of multimodal rendering into Virtual Reality applications. It first presents the value of intelligent multimedia systems for improving human activity in Virtual Environments. It then details the design of a software module responsible for supervising the rendering of multimodal information, depending on the interaction and its context. Building on existing psychophysical studies and concrete applications, we propose a model, an architecture and a decision process. Finally, a first implementation is presented to validate the core of the simulator and demonstrate the adaptability of its knowledge base.