There are difficulties in applying traditional reinforcement learning algorithms to robot motion-control tasks, because most algorithms handle only discrete actions and assume complete observability of the state. This paper addresses these two problems by combining a reinforcement learning algorithm with a continuous-time recurrent neural network (CTRNN) learning algorithm. We carried out an experiment on the pendulum swing-up task without rotational-speed information. We show that the rotational speed, treated as a hidden state, is estimated and encoded in the activation of a context neuron. As a result, the task is accomplished in several hundred trials using the proposed algorithm.
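The core idea above, a recurrent network whose internal (context) state can integrate past observations and thereby encode an unobserved velocity, can be illustrated with a minimal sketch. This is not the paper's implementation; the network sizes, time constants, and pendulum parameters are all illustrative assumptions. It shows only the forward dynamics: a CTRNN receives the pendulum angle but not its rotational speed, and its recurrent state evolves as a candidate carrier of that hidden information.

```python
import numpy as np

# Hypothetical sketch (assumed parameters, not the paper's setup):
# a CTRNN observes only the pendulum angle; rotational speed is hidden.

def pendulum_step(theta, omega, torque, dt=0.02, g=9.8, l=1.0, m=1.0):
    """Euler-integrate simple pendulum dynamics."""
    omega = omega + dt * (-(g / l) * np.sin(theta) + torque / (m * l * l))
    theta = theta + dt * omega
    return theta, omega

class CTRNN:
    """Minimal continuous-time RNN: leaky neurons, Euler integration."""
    def __init__(self, n_in, n_hidden, tau=1.0, dt=0.02, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.W_rec = rng.normal(0.0, 0.5, (n_hidden, n_hidden))
        self.tau, self.dt = tau, dt
        self.u = np.zeros(n_hidden)  # membrane potentials (internal state)

    def step(self, x):
        y = np.tanh(self.u)  # firing rates
        du = (-self.u + self.W_rec @ y + self.W_in @ x) / self.tau
        self.u = self.u + self.dt * du
        return np.tanh(self.u)

# Drive the network with the angle only; the recurrent state can, in
# principle, encode the speed because it integrates past angles.
net = CTRNN(n_in=1, n_hidden=4)
theta, omega = 0.1, 0.0
activations = []
for _ in range(100):
    theta, omega = pendulum_step(theta, omega, torque=0.0)
    activations.append(net.step(np.array([np.sin(theta)])))
activations = np.array(activations)
print(activations.shape)  # (100, 4)
```

In the full method, the CTRNN would additionally be trained (and one of its hidden units read off as a "context neuron"), with a reinforcement learning rule selecting the torque; this sketch omits both learning components.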