We investigate the role of redundancy in exploratory learning of inverse functions, where an agent learns to achieve goals by performing actions and observing their outcomes. We present an analysis of the linear, redundant case and study goal-directed exploration schemes, which are empirically successful but have hardly been theorized beyond negative results for special cases, and we prove their convergence to the optimal solution. We further show that the learning curves of such processes are intrinsically low-dimensional and S-shaped, which explains previous empirical findings. Finally, we compare our results to non-linear domains.
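The setting above can be sketched in code. The following is a minimal illustration, not the paper's actual algorithm: it assumes a hypothetical linear forward function with a 3-D action space and a 2-D outcome space (so the system is redundant), and learns a linear inverse by goal-directed exploration with direct regression of actions on observed outcomes. All matrix sizes, noise levels, and round counts are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward function with redundancy: 3-D actions map
# to 2-D outcomes, so every goal can be reached by many actions.
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5]])

def forward(q):
    """Observed outcome of action q; the agent only sees this output."""
    return A @ q

G = np.zeros((3, 2))  # current linear inverse estimate, refined each round

for noise in (0.5, 0.1):  # exploratory noise shrinks as the estimate improves
    goals = rng.uniform(-1.0, 1.0, size=(2, 200))            # sample goals
    actions = G @ goals + noise * rng.normal(size=(3, 200))  # act + explore
    outcomes = forward(actions)                              # observe outcomes
    # Direct regression of actions on outcomes: refit the inverse estimate.
    G = np.linalg.lstsq(outcomes.T, actions.T, rcond=None)[0].T

# In this noiseless linear case, G becomes a valid right inverse of A:
print(np.allclose(A @ G, np.eye(2), atol=1e-8))  # prints True
```

The regression picks one of the many valid inverses of the redundant map `A`; that it yields a consistent solution at all, rather than an invalid average of conflicting actions, is exactly the property of goal-directed (as opposed to purely motor-driven) exploration that the analysis addresses.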