Binding, that is, finding the correspondence between sensations in different modalities such as vision and touch, is one of the most fundamental cognitive functions. Learning a multimodal representation of the body is supposed to be the first step toward binding, since the morphological constraints on sensations during self-body-observation would make the binding problem tractable. In this paper, we address the issue of learning to match the foci of attention in vision and touch through self-body-observation. We propose a cross-anchoring Hebbian learning rule that uniquely associates double-touching with self-occlusion. Experiments in both computer simulation and with a real robot demonstrate the validity of the proposed method.
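The abstract names a cross-anchoring Hebbian learning rule without giving its form. As a minimal illustrative sketch only, assuming discrete tactile foci (double-touching postures) and visual foci (self-occlusion regions), a learning rate, winner-take-all "anchoring", and row normalization that are all my own simplifications rather than the paper's actual formulation, an associative update between co-occurring foci of attention might look like:

```python
import numpy as np

# Illustrative sketch only: associate discrete "double-touch" postures (tactile
# foci) with discrete "self-occlusion" regions (visual foci) via a Hebbian-style
# co-occurrence matrix. Cross-anchoring is approximated here by letting each
# modality's current strongest association (its "anchor") boost the update for
# the other; the rule in the paper may differ.

N_TOUCH = 5    # number of double-touching postures (assumed)
N_VISION = 5   # number of self-occlusion regions (assumed)

rng = np.random.default_rng(0)
W = np.full((N_TOUCH, N_VISION), 1.0 / N_VISION)  # touch -> vision association
V = np.full((N_VISION, N_TOUCH), 1.0 / N_TOUCH)   # vision -> touch association
eta = 0.05                                         # learning rate (assumed)

def cross_anchor_update(touch_id, vision_id):
    """Strengthen the observed pairing, anchored by each modality's current winner."""
    # Each modality's current best guess about the other modality (the anchor).
    anchor_v = np.argmax(W[touch_id])   # vision region predicted from touch
    anchor_t = np.argmax(V[vision_id])  # touch posture predicted from vision
    # Hebbian increment, boosted when the anchors already agree with the observation.
    boost = 1.0 + (anchor_v == vision_id) + (anchor_t == touch_id)
    W[touch_id, vision_id] += eta * boost
    V[vision_id, touch_id] += eta * boost
    # Keep each row normalized so it stays a probability-like distribution.
    W[touch_id] /= W[touch_id].sum()
    V[vision_id] /= V[vision_id].sum()

# Simulated self-body-observation: the same posture always produces the same
# occlusion region, standing in for the morphological constraint on sensations.
for _ in range(500):
    t = rng.integers(N_TOUCH)
    cross_anchor_update(t, t)           # assume posture t occludes region t

print(np.argmax(W, axis=1))             # converges to the identity mapping
```

Under this toy assumption that each double-touching posture deterministically co-occurs with one self-occlusion region, the two association matrices converge to a unique one-to-one mapping between the tactile and visual foci, which is the kind of unique association the abstract describes.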