Visual forward models predict future visual data from the previous visual sensory state and a motor command. The adaptive acquisition of visual forward models in robotic applications is hampered by the high dimensionality of visual data, which most machine learning and neural network algorithms do not handle well. Moreover, the forward model has to learn which parts of the visual output are actually predictable and which are not because they lack any corresponding part in the visual input. The present study proposes a learning algorithm that solves both problems. Instead of forecasting visual data directly, it predicts the mapping between pixel positions in the visual input and output. This mapping is learned by matching corresponding regions in the visual input and output while the robot explores different visual surroundings; unpredictable regions are revealed by the lack of any clear correspondence. The algorithm is applied successfully to a robot camera head whose camera images are additionally distorted by a retinal mapping. Two future applications of the resulting visual forward model are outlined: saccade learning and a task from the domain of eye-hand coordination.
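The core idea of matching corresponding regions and flagging unmatched ones can be illustrated with a minimal block-matching sketch. This is not the paper's implementation: the function name `match_regions`, the normalized-correlation score, and the `min_corr` threshold are assumptions chosen for illustration. Each block of the output image is searched for in a window of the input image; a block whose best correlation stays below the threshold is marked as unpredictable.

```python
import numpy as np

def match_regions(img_in, img_out, block=8, search=4, min_corr=0.7):
    """Illustrative block matching between an input and an output image.

    For each block-by-block tile of img_out, find the best-matching tile
    in img_in within a +/- search window, scored by normalized
    cross-correlation. Returns per-block displacements (dy, dx) and a
    boolean mask marking blocks whose best score falls below min_corr,
    i.e. blocks treated here as unpredictable.
    """
    H, W = img_out.shape
    nby, nbx = H // block, W // block
    disp = np.zeros((nby, nbx, 2), dtype=int)
    unpredictable = np.zeros((nby, nbx), dtype=bool)
    for by in range(nby):
        for bx in range(nbx):
            y, x = by * block, bx * block
            patch = img_out[y:y + block, x:x + block].astype(float)
            p = patch - patch.mean()
            best, best_d = -np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > H or xx + block > W:
                        continue  # candidate window leaves the input image
                    cand = img_in[yy:yy + block, xx:xx + block].astype(float)
                    c = cand - cand.mean()
                    denom = np.sqrt((p * p).sum() * (c * c).sum())
                    corr = (p * c).sum() / denom if denom > 0 else 0.0
                    if corr > best:
                        best, best_d = corr, (dy, dx)
            disp[by, bx] = best_d
            unpredictable[by, bx] = best < min_corr
    return disp, unpredictable
```

In a learning loop, the recovered displacement field would serve as the training target for the pixel-position mapping, while blocks flagged as unpredictable would simply be excluded from the target.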