The computational complexity of motor control can be reduced by drawing on a library of motor synergies. We present a new model, the Greedy Additive Regression (GAR) model, for learning a library of torque sequences and for learning the coefficients of a linear combination of library sequences that minimizes a cost function. From the perspective of numerical optimization, the GAR model is interesting because it builds a library of "local features" (each sequence in the library is the solution to a single training task) and learns to combine these sequences using a local optimization procedure, namely additive regression. We speculate that learners combining local representational primitives with local optimization procedures will perform well on nonlinear tasks. The GAR model is also interesting from the perspective of motor control because it outperforms several competing models. Results with a simulated two-joint arm show that the GAR model performs consistently well: it rapidly learns to perform novel, complex motor tasks. Moreover, its library is overcomplete and sparse, meaning that only a small fraction of the stored torque sequences are used when learning a new movement. The library is also robust: after an initial training period, nearly all novel movements can be learned as additive combinations of stored sequences, and the model generalizes well when the arm's dynamics are altered between training and test conditions, for example when a payload is added to the arm. Finally, the GAR model works well whether motor tasks are specified in joint space or in Cartesian space.
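The greedy, additive fitting procedure described above can be sketched as a matching-pursuit-style loop: at each step, pick the library sequence (with a least-squares coefficient) that most reduces the remaining cost, and add it to the combination. This is an illustrative sketch only; the squared-error cost, the array layout, and the function name are assumptions, not the paper's actual cost function or implementation.

```python
import numpy as np

def greedy_additive_regression(library, target, n_steps=10):
    """Greedily build an additive combination of library sequences.

    library: (K, T) array of stored torque sequences (hypothetical layout)
    target:  (T,) desired torque sequence; squared error stands in here
             for the task cost used in the original model.
    Returns the learned coefficients and the final residual.
    """
    residual = target.astype(float).copy()
    coeffs = np.zeros(len(library))
    for _ in range(n_steps):
        # Squared norm of each stored sequence
        norms = np.einsum('kt,kt->k', library, library)
        # Best least-squares coefficient for each candidate sequence
        projections = library @ residual
        candidate_coeffs = np.where(
            norms > 0, projections / np.maximum(norms, 1e-12), 0.0)
        # Reduction in squared error each candidate would achieve
        gains = candidate_coeffs * projections
        best = int(np.argmax(gains))
        if gains[best] <= 1e-12:
            break  # no stored sequence reduces the cost further
        coeffs[best] += candidate_coeffs[best]
        residual -= candidate_coeffs[best] * library[best]
    return coeffs, residual
```

Because each step adds only the single most useful sequence, the resulting coefficient vector is sparse, consistent with the observation that only a small fraction of the library is used per movement.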
We conclude that learning techniques using local primitives and optimization procedures are viable and potentially important methods for motor control and possibly other domains, and that these techniques deserve further examination by the artificial intelligence and cognitive science communities.