Systems with multimodal interaction capabilities have gained considerable attention in recent years. In particular, so-called companion systems, which offer an adaptive, multimodal user interface, show great promise for natural human-computer interaction. Yet while ever more sophisticated sensors become available, current systems that accept multimodal inputs (e.g. speech and gesture) still lack the robustness of input interpretation that companion systems require. We demonstrate how evidential reasoning can be applied in the domain of graphical user interfaces to provide the reliability and robustness users expect. For this purpose, an existing approach from the robotics domain based on the Transferable Belief Model is adapted and extended.
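At the core of the Transferable Belief Model is the unnormalized (conjunctive) combination of mass functions: evidence from two sources is fused by multiplying the masses of intersecting focal sets, and any conflicting mass is kept on the empty set rather than renormalized away. The sketch below illustrates this rule on a hypothetical GUI scenario; the frame of discernment, the modality outputs, and all mass values are illustrative assumptions, not taken from the paper.

```python
from itertools import product

def conjunctive_combine(m1, m2):
    """TBM conjunctive rule: combine two mass functions given as
    {frozenset(hypotheses): mass}. Masses of intersecting focal sets
    are multiplied; conflict accumulates on frozenset() (the empty
    set) instead of being normalized away, as in the TBM."""
    fused = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        c = a & b  # intersection of the two focal sets
        fused[c] = fused.get(c, 0.0) + wa * wb
    return fused

# Hypothetical example: two modalities report which GUI widget the
# user addressed; the frame of discernment is {button, slider}.
B, S = frozenset({"button"}), frozenset({"slider"})
FRAME = B | S  # mass on the whole frame expresses ignorance

speech  = {B: 0.6, FRAME: 0.4}          # speech recognizer output
gesture = {B: 0.5, S: 0.3, FRAME: 0.2}  # pointing-gesture output

fused = conjunctive_combine(speech, gesture)
# fused[frozenset()] quantifies the conflict between the modalities
```

In this toy run the combined mass concentrates on "button" (0.62), while 0.18 lands on the empty set, signaling partial conflict between speech and gesture; an interaction manager could use that conflict mass to trigger a clarification dialog instead of silently committing to an interpretation.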