In this paper, we confront the problem of applying reinforcement learning to agents that perceive the environment through many sensors and that can perform parallel actions using many actuators, as is the case in complex autonomous robots. We argue that reinforcement learning can only be successfully applied in this case if strong assumptions are made about the characteristics of the environment in which learning is performed, so that the relevant sensor readings and motor commands can be readily identified. The introduction of such assumptions leads to strongly biased learning systems that may lose the generality of traditional reinforcement-learning algorithms. Along this line, we observe that, in realistic situations, the reward received by the robot depends on only a reduced subset of all the executed actions, and that only a reduced subset of the sensor inputs (possibly different in each situation and for each action) is relevant to predicting the reward. We formalize this property in the so-called categorizability assumption, and we present an algorithm that takes advantage of the categorizability of the environment, reducing learning time with respect to existing reinforcement-learning algorithms. Results from applying the algorithm to two simulated realistic robotic problems (landmark-based navigation and six-legged robot gait generation) are reported to validate our approach and to compare it to existing flat and generalization-based reinforcement-learning approaches.
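To make the categorizability assumption concrete, the following is a minimal illustrative sketch (not the paper's algorithm): the reward depends on only two of eight sensors, a relevance test identifies that subset from random-exploration samples, and a simple tabular value update then operates over the reduced state. All names, the relevance test, and the one-step (gamma = 0) update are our own hypothetical choices for illustration.

```python
import random

random.seed(0)

N_SENSORS = 8          # many sensors, most of them irrelevant to the reward
ACTIONS = (0, 1)

def read_sensors():
    """One random binary reading per sensor."""
    return tuple(random.randint(0, 1) for _ in range(N_SENSORS))

def reward(sensors, action):
    # Hidden ground truth for this toy example: only sensors 2 and 5 matter.
    return 1.0 if action == 1 and sensors[2] == 1 and sensors[5] == 1 else 0.0

def find_relevant(n_samples=4000, threshold=0.1):
    """Score each sensor by how much conditioning on it shifts mean reward."""
    samples = []
    for _ in range(n_samples):
        s = read_sensors()
        samples.append((s, reward(s, random.choice(ACTIONS))))
    relevant = []
    for i in range(N_SENSORS):
        hi = [r for s, r in samples if s[i] == 1]
        lo = [r for s, r in samples if s[i] == 0]
        if abs(sum(hi) / len(hi) - sum(lo) / len(lo)) > threshold:
            relevant.append(i)
    return relevant

def learn_values(relevant, n_steps=3000, alpha=0.2):
    """Tabular one-step value update over the *reduced* state (gamma = 0)."""
    Q = {}
    for _ in range(n_steps):
        s = read_sensors()
        state = tuple(s[i] for i in relevant)   # categorized state
        a = random.choice(ACTIONS)              # pure exploration
        q = Q.setdefault(state, [0.0, 0.0])
        q[a] += alpha * (reward(s, a) - q[a])
    return Q
```

With the reduced state, the value table has only 2^2 x 2 entries instead of 2^8 x 2, which is the kind of learning-time saving the categorizability assumption is meant to buy.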