Pointing at something means orienting the hand, the arm, the head, or the body in the direction of an object or an event. This skill constitutes a basic communicative ability for cognitive agents such as humanoid robots. The goal of this study is to show that approximate and, in particular, precise pointing can be learned as a direct mapping from the object's pixel coordinates in the visual field to hand positions or to joint angles. This highly nonlinear mapping defines the position and orientation of the robot's arm. The study underlines that this is possible without explicitly computing the object's depth and 3D position, since only the direction is required. To this end, three state-of-the-art neural network paradigms (multilayer perceptron, extreme learning machine, and reservoir computing) are evaluated on real-world data gathered from the humanoid robot iCub. For the case of precise pointing, training data are generated interactively and recorded through kinesthetic teaching. Successful generalization is verified on the iCub using a laser pointer attached to its hand.
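The core technique the abstract describes is a learned regression from 2D pixel coordinates directly to arm joint angles. As a concrete illustration, below is a minimal sketch of one of the three named paradigms, an extreme learning machine (ELM): a fixed random hidden layer followed by a linear readout trained by regularized least squares. All dimensions, the ridge parameter, and the synthetic training data here are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of an ELM mapping pixel coordinates to joint angles.
# Shapes, names, and hyperparameters are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 2, 100, 4        # (u, v) pixels -> 4 joint angles (assumed)
W_in = rng.normal(size=(n_hidden, n_in)) # random input weights, fixed after init
b = rng.normal(size=n_hidden)            # random biases, also fixed

def hidden(X):
    """Fixed nonlinear expansion of the pixel coordinates."""
    return np.tanh(X @ W_in.T + b)

# Hypothetical training set: X holds object pixel coordinates, Y the joint
# angles that would come from kinesthetic teaching on the robot.
X = rng.uniform(0.0, 1.0, size=(500, n_in))
Y = rng.uniform(-1.0, 1.0, size=(500, n_out))

# Only the linear readout is trained, via ridge-regularized least squares:
# W_out = (H^T H + lambda I)^{-1} H^T Y.
H = hidden(X)
ridge = 1e-3                             # regularization strength (assumed)
W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)

def point_at(pixel_uv):
    """Map an object's pixel coordinates directly to joint angles."""
    return hidden(np.atleast_2d(pixel_uv)) @ W_out

print(point_at([0.4, 0.7]))              # predicted joint angles for one target
```

The same interface would apply to the other two paradigms (a multilayer perceptron trained by backpropagation, or a reservoir network with a trained readout); the ELM is shown because only its readout is learned, which keeps the closed-form least-squares step explicit.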