This paper presents a new framework for real-time multimodal data processing. The framework comprises modules for different input and output signals and was designed for human-human and human-robot interaction scenarios. Individual modules for recording selected channels such as speech, gestures, or facial expressions can be combined with different output options (e.g., robot reactions) in a highly flexible manner. Depending on the included modules, both online and offline data processing are possible. The framework was used to analyze human-human interaction in order to gain insights into important interaction factors and their dynamics. The recorded data comprise speech, facial expressions, gestures, and physiological signals. These naturally produced data were annotated and labeled in order to train recognition modules that will be integrated into the existing framework. The overall aim is a system that recognizes and reacts to those parameters that humans take into account during interaction. In this paper, the technical implementation and its application in a human-human and a human-robot interaction scenario are presented.
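The abstract does not specify the framework's interfaces, so the following is only a minimal sketch of how such a modular pipeline could be organized: input modules for individual channels (speech, gestures, facial expressions) feed a shared log and a set of output modules (robot reactions), with the same log reusable for offline annotation. All class and method names (Observation, InputModule, OutputModule, Framework) are hypothetical and not the authors' actual API.

    # Illustrative sketch only; not the authors' implementation.
    from dataclasses import dataclass, field
    from typing import Callable, List


    @dataclass
    class Observation:
        channel: str      # e.g. "speech", "gesture", "facial_expression"
        timestamp: float  # seconds since session start
        payload: dict     # raw or pre-processed sensor data


    class InputModule:
        """Base class for one recording channel."""
        channel = "generic"

        def poll(self) -> List[Observation]:
            raise NotImplementedError


    class OutputModule:
        """Base class for one output option, e.g. a robot reaction."""
        def react(self, observation: Observation) -> None:
            raise NotImplementedError


    @dataclass
    class Framework:
        inputs: List[InputModule] = field(default_factory=list)
        outputs: List[OutputModule] = field(default_factory=list)
        log: List[Observation] = field(default_factory=list)

        def step_online(self) -> None:
            """One online cycle: poll every input module, fan out to every output."""
            for module in self.inputs:
                for obs in module.poll():
                    self.log.append(obs)          # keep data for later offline annotation
                    for out in self.outputs:
                        out.react(obs)

        def replay_offline(self, handler: Callable[[Observation], None]) -> None:
            """Offline pass over recorded data, e.g. for labeling or training."""
            for obs in sorted(self.log, key=lambda o: o.timestamp):
                handler(obs)

Under these assumptions, combining channels flexibly amounts to choosing which InputModule and OutputModule instances are registered with the Framework, and running either the online loop (for live interaction) or the offline replay (for annotation and training of recognition modules).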