Proceedings of the Seventh International Conference on Machine Learning (1990)
Introduction to Reinforcement Learning
Metalearning and neuromodulation. Neural Networks - Computational Models of Neuromodulation.
Computer.
Meta-learning in reinforcement learning. Neural Networks.
Learning behavior-selection by emotions and cognition in a multi-goal robot task. The Journal of Machine Learning Research.
Affective Learning — A Manifesto. BT Technology Journal.
On Affect and Self-adaptation: Potential Benefits of Valence-Controlled Action-Selection. IWINAC '07 Proceedings of the 2nd International Work-Conference on the Interplay Between Natural and Artificial Computation, Part I: Bio-inspired Modeling of Cognitive Tasks.
Strategies for Affect-Controlled Action-Selection in Soar-RL. IWINAC '07 Proceedings of the 2nd International Work-Conference on Nature Inspired Problem-Solving Methods in Knowledge Engineering: Interplay Between Natural and Artificial Computation, Part II.
Reinforcement learning: a survey. Journal of Artificial Intelligence Research.
Soar-RL: integrating reinforcement learning with Soar. Cognitive Systems Research.
The Neuromodulatory System: A Framework for Survival and Adaptive Behavior in a Challenging World. Adaptive Behavior - Animals, Animats, Software Agents, Robots, Adaptive Systems.
Affective negotiation support systems. Journal of Ambient Intelligence and Smart Environments.
Society of Mind cognitive agent architecture applied to drivers adapting in a traffic context. Adaptive Behavior - Animals, Animats, Software Agents, Robots, Adaptive Systems.
Emotion plays an important role in thinking. In this article we study affective control of the amount of simulated anticipatory behavior in adaptive agents using a computational model. Our approach is based on model-based reinforcement learning (RL) and inspired by the simulation hypothesis (Cotterill, 2001; Hesslow, 2002). The simulation hypothesis states that thinking is internal simulation of behavior using the same sensory-motor systems as those used for overt behavior. Here, we study the adaptiveness of an artificial agent when action-selection bias is induced by an affect-controlled amount of simulated anticipatory behavior. To this end, we introduce an affect-controlled simulation-selection mechanism that uses the predictions of the agent's RL model to select anticipatory behaviors for simulation. Based on experiments with adaptive agents in two nondeterministic, partially observable grid-worlds, we conclude that (1) internal simulation has an adaptive benefit and (2) affective control can reduce the amount of simulation needed for this benefit. This is specifically the case if the following relation holds: positive affect decreases the amount of simulation towards simulating the best potential next action, while negative affect increases the amount of simulation towards simulating all potential next actions. In essence, we use artificial affect to control mental exploration versus exploitation. Thus, agents "feeling positive" can think ahead in a narrow sense and free up working-memory resources, while agents "feeling negative" must think ahead in a broad sense and maximize usage of working memory. Our results are consistent with several psychological findings on the relation between affect and learning, and contribute to answering the question of when positive versus negative affect is useful during adaptation.
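The core mechanism — positive affect narrowing internal simulation toward the single best predicted action, negative affect broadening it toward all candidate actions — can be sketched in a few lines. This is an illustrative Python sketch, not the authors' implementation: the function name, the affect scale in [-1, 1], and the linear mapping from affect to the number of simulated actions are assumptions made here for clarity.

```python
def select_simulation_set(q_values, affect):
    """Choose which candidate next actions to simulate internally.

    q_values: dict mapping action -> predicted value (from the RL model).
    affect:   artificial affect in [-1, 1]; +1 = maximally positive,
              -1 = maximally negative (scale assumed for illustration).

    Positive affect shrinks the simulation set toward the single
    best-valued action (mental exploitation); negative affect grows it
    toward all candidate actions (mental exploration).
    """
    # Rank candidate actions by the model's value predictions.
    actions = sorted(q_values, key=q_values.get, reverse=True)

    # Map affect linearly to a fraction of actions to simulate:
    # affect = +1 -> fraction 0 (clamped to 1 action),
    # affect = -1 -> fraction 1 (all actions).
    fraction = (1 - affect) / 2
    k = max(1, round(fraction * len(actions)))

    return actions[:k]
```

Under this mapping, an agent "feeling positive" simulates only the top-ranked action and so uses little working memory, while an agent "feeling negative" simulates the full candidate set; intermediate affect yields an intermediate simulation budget.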