The design and implementation of a robot brain often requires choosing between different modules with similar functionality. Many implementations and components are easy to create or can be downloaded, but it is difficult to assess which combinations of modules work well and which do not. This paper discusses a reinforcement learning mechanism by which the robot chooses between the different components using empirical feedback and optimization criteria. Using the interval estimation algorithm, the robot discards poorly performing modules and retains only the best ones. A discount factor ensures that the robot keeps adapting to new circumstances in the real world. This allows the robot to adapt continuously at the architecture level, and it also lets large development teams contribute several implementations with similar functionality, giving the robot the best chance of solving a task. The architecture is tested in the RoboCup@Home setting and can handle failure situations.
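The abstract does not give the exact formulation, but the described mechanism can be sketched as follows: each module keeps a discounted count of trials and successes, the selector computes an upper confidence bound on each module's success rate, and the module with the highest bound is chosen. This is a minimal illustration assuming a normal-approximation confidence interval and exponential forgetting; the class and parameter names are hypothetical, not taken from the paper.

```python
import math

class IntervalEstimationSelector:
    """Hypothetical sketch of interval-estimation module selection.

    Chooses among interchangeable modules by the upper bound of a
    confidence interval on each module's empirical success rate.
    Discounting old feedback lets the selector keep adapting when
    module quality changes in the real world."""

    def __init__(self, modules, z=1.96, discount=0.99):
        self.modules = list(modules)
        self.z = z                # confidence-interval width (assumed)
        self.discount = discount  # forgetting factor for old feedback
        self.successes = {m: 0.0 for m in self.modules}
        self.trials = {m: 0.0 for m in self.modules}

    def upper_bound(self, module):
        n = self.trials[module]
        if n == 0:
            return float("inf")   # untried modules are always worth trying
        p = self.successes[module] / n
        # normal-approximation upper confidence bound on the success rate
        return p + self.z * math.sqrt(p * (1.0 - p) / n)

    def select(self):
        # pick the module with the most optimistic success estimate
        return max(self.modules, key=self.upper_bound)

    def feedback(self, module, success):
        # discount all counts so recent outcomes dominate, then record
        for m in self.modules:
            self.successes[m] *= self.discount
            self.trials[m] *= self.discount
        self.trials[module] += 1.0
        if success:
            self.successes[module] += 1.0
```

With this scheme, a module that repeatedly fails sees its upper bound shrink and stops being selected, while the discount factor ensures that a once-poor module can be reselected if circumstances change.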