Using the transferable belief model for multimodal input fusion in companion systems
MPRSS'12 Proceedings of the First international conference on Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction
Considerable research has been done on using information from multiple modalities, such as hand gestures, facial gestures, or speech, to improve interaction between humans and computers, and many promising human-computer interfaces (HCI) have been developed in recent years. However, most current HCI systems have a few drawbacks. Firstly, they are highly dependent on the performance of individual sensors. Secondly, the fusion of information from these sensors tends to ignore the semantic nature of the modalities, which may reinforce or clarify each other over time. Finally, they are not robust enough at representing the imprecise nature of human gestures, since individual gestures are highly ambiguous in themselves. In this paper, we propose an approach for the semantic fusion of different input modalities based on transferable belief models. We show that this approach allows for a better representation of the ambiguity involved in recognizing gestures. Ambiguity is resolved by combining the beliefs of the individual sensors on the input information to form new extended concepts, based on a pre-defined domain-specific knowledge base represented by conceptual graphs. We apply this technique to a multimodal system consisting of a hand gesture recognition sensor and a brain-computer interface. It is shown that the technique can successfully combine individual gestures obtained from the two sensors to form meaningful concepts and resolve ambiguity. The advantage of this approach is that it remains robust even if one of the sensors performs poorly or provides no input. Another important feature is its scalability: additional input modalities, such as speech or facial gestures, can be integrated into the system at minimal cost to form a comprehensive HCI interface.
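The belief combination at the heart of this approach can be sketched with the conjunctive rule of the Transferable Belief Model: each sensor assigns mass to subsets of candidate interpretations, and the fused mass of a subset is the sum of products of masses whose intersection equals that subset. Unlike Dempster's rule, no normalization is applied, so conflicting evidence accumulates on the empty set. The sensors and mass assignments below are illustrative assumptions, not values from the paper:

```python
from itertools import product

def tbm_combine(m1, m2):
    """Conjunctive combination rule of the Transferable Belief Model.

    m1, m2: dicts mapping focal elements (frozensets of hypotheses)
    to belief mass. Conflict mass is kept on the empty set rather
    than normalized away (open-world assumption of the TBM).
    """
    combined = {}
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b  # empty intersection represents conflict
        combined[inter] = combined.get(inter, 0.0) + x * y
    return combined

# Hypothetical example: a hand-gesture sensor and a BCI each assign
# mass over candidate commands {select, move, rotate}.
gesture = {frozenset({"select", "move"}): 0.7,
           frozenset({"rotate"}): 0.3}
bci = {frozenset({"select"}): 0.6,
       frozenset({"select", "move", "rotate"}): 0.4}

fused = tbm_combine(gesture, bci)
# The mass on frozenset({"select"}) rises to 0.42, while the
# disagreement between {rotate} and {select} leaves 0.18 on the
# empty set, quantifying inter-sensor conflict.
```

Keeping the conflict mass explicit is what makes the scheme degrade gracefully when one sensor is unreliable: a vacuous sensor (all mass on the full frame) leaves the other sensor's beliefs unchanged.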