Standing balance control using a trajectory library
IROS'09 Proceedings of the 2009 IEEE/RSJ international conference on Intelligent robots and systems
We combine three threads of research on approximate dynamic programming: sparse random sampling of states, value function and policy approximation using local models, and the use of local trajectory optimizers to globally optimize a policy and its associated value function. Our focus is on finding steady-state policies for deterministic, time-invariant, discrete-time control problems with continuous states and actions, which are common in robotics. In this paper, we describe our approach and provide initial results on several simulated robotics problems.
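The combination described above can be pictured as a library of locally optimized trajectories queried with nearest-neighbor local models. The following is a minimal, hypothetical sketch (class and method names, and the inverse-distance weighting, are illustrative assumptions, not the paper's exact formulation): each library entry stores a sampled state along with the value and action produced by a local trajectory optimizer, and queries blend the nearest stored entries.

```python
import math

class TrajectoryLibrary:
    """Illustrative trajectory library (an assumption for exposition,
    not the paper's implementation)."""

    def __init__(self):
        # Each entry: (state tuple, value estimate, action) taken from
        # a locally optimized trajectory passing through that state.
        self.entries = []

    def add(self, state, value, action):
        self.entries.append((state, value, action))

    def query(self, state, k=3):
        """Approximate the value and action at `state` from the k
        nearest stored states, using inverse-distance weighting as a
        simple stand-in for a local model."""
        nearest = sorted(
            (math.dist(state, s), v, a) for s, v, a in self.entries
        )[:k]
        weights = [1.0 / (d + 1e-9) for d, _, _ in nearest]
        total = sum(weights)
        value = sum(w * v for w, (_, v, _) in zip(weights, nearest)) / total
        action = sum(w * a for w, (_, _, a) in zip(weights, nearest)) / total
        return value, action

# Usage: populate the library from (hypothetical) optimized trajectories,
# then query it at a novel state to get an interpolated value and action.
lib = TrajectoryLibrary()
lib.add((0.0, 0.0), value=1.0, action=-0.5)
lib.add((1.0, 0.0), value=2.0, action=0.5)
lib.add((0.0, 1.0), value=3.0, action=0.0)
v, a = lib.query((0.1, 0.1), k=2)
```

In the paper's setting, the stored values and actions would come from local trajectory optimization, and the local models would be richer than simple distance weighting; the sketch only shows the library-lookup structure.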