Goal-directed feature learning
IJCNN'09: Proceedings of the 2009 International Joint Conference on Neural Networks
The brain is able to perform actions based on an adequate internal representation of the world, in which task-irrelevant features are ignored and incomplete sensory data are estimated. Traditionally, it is assumed that such abstract state representations are obtained purely from the statistics of the sensory input, for example by unsupervised learning methods. More recent findings, however, suggest an influence of the dopaminergic system, which can be modeled by a reinforcement learning approach. Standard reinforcement learning algorithms act on a single-layer network connecting the state space to the action space. Here, we add a feature detection stage and a memory layer, which together construct the state space for a learning agent. The memory layer consists of the state activation at the previous time step as well as the previously chosen action. We present a temporal-difference-based learning rule for training the weights from these additional inputs to the state layer. As a result, the performance of the network is maintained both in the presence of task-irrelevant features and at randomly occurring time steps during which the input is invisible. Interestingly, a goal-directed forward model emerges from the memory weights, covering only the state-action pairs that are relevant to the task. The model links reinforcement learning, feature detection and forward models, and may help to explain how reward systems recruit cortical circuits for goal-directed feature detection and prediction.
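The architecture described above can be illustrated with a minimal pure-Python sketch: a linear state layer driven by the current input plus a memory trace (the previous state activation and the previously chosen action), where a single temporal-difference error trains the value, action and memory weights. All names (`W_mem`, `td_step`, the layer sizes, and so on) are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Layer sizes and learning constants (illustrative choices, not from the paper).
N_IN, N_STATE, N_ACT = 4, 3, 2
GAMMA, ALPHA = 0.9, 0.1

random.seed(0)
W_in  = [[random.uniform(-0.1, 0.1) for _ in range(N_IN)]    for _ in range(N_STATE)]
W_mem = [[0.0 for _ in range(N_STATE + N_ACT)]               for _ in range(N_STATE)]
W_act = [[random.uniform(-0.1, 0.1) for _ in range(N_STATE)] for _ in range(N_ACT)]
w_val = [0.0] * N_STATE   # value weights of the TD critic

def state_activation(x, prev_s, prev_a):
    """State layer: feed-forward drive from the input x plus drive from the
    memory layer (previous state activation concatenated with previous action)."""
    mem = prev_s + prev_a
    return [sum(W_in[i][j] * x[j] for j in range(N_IN)) +
            sum(W_mem[i][k] * mem[k] for k in range(len(mem)))
            for i in range(N_STATE)]

def select_action(s):
    """Greedy action choice from the state layer (exploration omitted)."""
    return max(range(N_ACT),
               key=lambda a: sum(W_act[a][i] * s[i] for i in range(N_STATE)))

def td_step(x, prev_s, prev_a, reward, next_value):
    """One learning step: the same TD error updates the value weights and the
    memory weights, so W_mem comes to predict task-relevant state activations
    (the goal-directed forward model)."""
    s = state_activation(x, prev_s, prev_a)
    v = sum(w_val[i] * s[i] for i in range(N_STATE))
    delta = reward + GAMMA * next_value - v          # TD error
    mem = prev_s + prev_a
    for i in range(N_STATE):
        w_val[i] += ALPHA * delta * s[i]
        for k in range(len(mem)):
            W_mem[i][k] += ALPHA * delta * mem[k]
    return s, delta
```

When the input is invisible at some time step, `x` can be set to zeros and the state activation is carried by the memory weights alone, which is the mechanism the abstract attributes to the memory layer.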