The increasing complexity of modern robots makes it prohibitively hard to model such systems as accurately as many applications require. In such cases, machine learning methods offer a promising alternative: approximating the model from measured data. To date, high computational demands have largely restricted machine learning techniques to offline applications. However, making robots adaptive to changes in their dynamics, and able to cope with unexplored regions of the state space, requires online learning. In this paper, we propose an approximation of support vector regression (SVR) obtained by sparsification based on the linear independence of the training data. The resulting method is applicable to real-time online learning, and it exhibits competitive learning accuracy compared with standard regression techniques such as ν-SVR, Gaussian process regression (GPR), and locally weighted projection regression (LWPR).
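The sparsification described above admits new training samples only when they are (approximately) linearly independent of the points already retained, measured in the kernel feature space. A minimal sketch of such an independence test is shown below; the function names, the RBF kernel choice, and the threshold `nu` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel; gamma is an illustrative bandwidth choice.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def is_independent(dictionary, x, nu=1e-3, gamma=1.0):
    """Approximate linear-independence test in feature space.

    Returns True if phi(x) cannot be reconstructed from the feature
    images of the dictionary points within tolerance nu, i.e. the
    sample carries new information and should be retained.
    """
    if not dictionary:
        return True  # first sample is always admitted
    # Kernel matrix of the current dictionary and kernel vector of x.
    K = np.array([[rbf(a, b, gamma) for b in dictionary] for a in dictionary])
    k = np.array([rbf(a, x, gamma) for a in dictionary])
    # Best reconstruction coefficients (small jitter for stability).
    coeffs = np.linalg.solve(K + 1e-10 * np.eye(len(dictionary)), k)
    # Residual of projecting phi(x) onto span{phi(d) : d in dictionary}.
    delta = rbf(x, x, gamma) - k @ coeffs
    return delta > nu

# Streaming usage: keep only informative samples as a sparse basis.
dictionary = []
for point in [np.array([0.0]), np.array([0.01]), np.array([3.0])]:
    if is_independent(dictionary, point):
        dictionary.append(point)
```

In an online setting, the regression model is then trained on (or incrementally updated with) the retained dictionary only, which bounds the per-step cost and makes real-time operation feasible.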