One of the most general frameworks for phrasing control problems for complex, redundant robots is operational-space control. However, while this framework is of essential importance for robotics and well understood from an analytical point of view, achieving accurate control with it can be prohibitively hard in the face of modeling errors, which are inevitable in complex robots (e.g., humanoid robots). In this paper, we suggest treating operational-space control as a direct inverse model learning problem. A first important insight of this paper is that a physically correct solution to the inverse problem with redundant degrees of freedom does exist when learning of the inverse map is performed in a suitable piecewise linear way. The second crucial component of our work is the insight that many operational-space controllers can be understood in terms of a constrained optimal control problem. The cost function associated with this optimal control problem allows us to formulate a learning algorithm that automatically synthesizes a globally consistent desired resolution of redundancy while learning the operational-space controller. From the machine learning point of view, this learning problem corresponds to a reinforcement learning problem that maximizes an immediate reward. We employ an expectation-maximization policy search algorithm to solve this problem. Evaluations on a three-degree-of-freedom robot arm illustrate the suggested approach. The application to a physically realistic simulator of the anthropomorphic SARCOS Master arm demonstrates feasibility for complex, high-degree-of-freedom robots. We also show that the proposed method works in the setting of learning resolved motion rate control on a real, physical Mitsubishi PA-10 medical robotics arm.
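The expectation-maximization policy search described above can be illustrated with reward-weighted regression on an immediate-reward problem. The sketch below is not the authors' algorithm: it is a minimal 1-D analogue in which a stochastic linear policy u = theta*x + noise is refit by weighted least squares, with each sample weighted by exp(beta*r) so that high-reward samples dominate the M-step. The task (ideal command u* = 2x), the parameters beta, sigma, and the annealing schedule are all illustrative assumptions.

```python
import math
import random

def reward(x, u):
    # Hypothetical immediate reward: the ideal command is u* = 2x,
    # so reward is the negative squared command error.
    return -(u - 2.0 * x) ** 2

def reward_weighted_regression(iters=20, n_samples=200, beta=5.0, seed=0):
    """EM-style policy improvement: E-step samples and weights,
    M-step refits the policy slope by weighted least squares."""
    rng = random.Random(seed)
    theta = 0.0   # initial policy mean slope
    sigma = 1.0   # exploration noise
    for _ in range(iters):
        num = den = 0.0
        for _ in range(n_samples):
            x = rng.uniform(0.5, 1.5)               # state
            u = theta * x + rng.gauss(0.0, sigma)   # exploratory action
            w = math.exp(beta * reward(x, u))       # EM weight from reward
            num += w * x * u
            den += w * x * x
        theta = num / den                # weighted least-squares update
        sigma = max(0.1, 0.9 * sigma)    # anneal exploration (assumed schedule)
    return theta

theta = reward_weighted_regression()
```

Because the weights exponentiate the reward, the regression is pulled toward the reward-maximizing command, and theta converges near the optimal slope of 2 without any gradient of the reward being computed.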