Tracking pointing gesture in 3D space for wearable visual interfaces
Proceedings of the international workshop on Human-centered multimedia
We present a vision system for human-machine interaction based on a small wearable camera mounted on glasses. The camera views the area in front of the user, in particular the hands. To evaluate hand movements for pointing gestures and to recognise object references, we introduce an approach that integrates bottom-up generated feature maps with top-down propagated recognition results. Modules for context-free focus of attention run in parallel with the hand gesture recognition. In contrast to other approaches, the two branches are fused at the sub-symbolic level, which facilitates both the integration of further modalities and the generation of auditory feedback.
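The abstract does not spell out the fusion mechanism, but sub-symbolic fusion of bottom-up feature maps with a top-down prior is commonly realised as a pixel-wise weighted combination of activation maps. The sketch below illustrates that general idea under stated assumptions: the function name `fuse_attention`, the equal default weights, and the choice of feature maps are all illustrative, not taken from the paper.

```python
import numpy as np

def fuse_attention(bottom_up_maps, top_down_prior, w_bu=0.5, w_td=0.5):
    """Fuse bottom-up feature maps with a top-down prior map into a
    single attention map. Fusion is a pixel-wise weighted sum of
    activations (sub-symbolic), not a combination of symbolic
    decisions. All parameters here are illustrative assumptions."""
    # Normalise each bottom-up feature map (e.g. colour, motion,
    # skin-likelihood) to [0, 1] and average them into one saliency map.
    saliency = np.zeros_like(top_down_prior, dtype=float)
    for m in bottom_up_maps:
        rng = m.max() - m.min()
        saliency += (m - m.min()) / rng if rng > 0 else np.zeros_like(m, dtype=float)
    saliency /= len(bottom_up_maps)

    # Weighted pixel-wise combination with the top-down prior
    # (e.g. an expectation map propagated from gesture recognition).
    fused = w_bu * saliency + w_td * top_down_prior

    # Focus of attention = location of maximum activation.
    focus = np.unravel_index(np.argmax(fused), fused.shape)
    return fused, focus

# Toy example: one bottom-up map and a top-down prior that both
# favour the same location.
h, w = 4, 4
feature_map = np.zeros((h, w))
feature_map[1, 2] = 1.0
prior = np.zeros((h, w))
prior[1, 2] = 0.5
fused, focus = fuse_attention([feature_map], prior)
# focus → (1, 2)
```

Because both branches contribute continuous activation values, adding another modality (say, an auditory localisation map) only requires summing in one more weighted map, which is the flexibility the sub-symbolic design is meant to provide.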