Learning a Navigation Task in Changing Environments by Multi-task Reinforcement Learning

  • Authors:
  • Axel Großmann, Riccardo Poli

  • Venue:
  • EWLR-8 Proceedings of the 8th European Workshop on Learning Robots: Advances in Robot Learning
  • Year:
  • 1999

Abstract

This work is concerned with practical issues surrounding the application of reinforcement learning to a mobile robot. The robot's task is to navigate in a controlled environment and to collect objects using its gripper. Our aim is to build a control system that enables the robot to learn incrementally and to adapt to changes in the environment. The former is known as multi-task learning; the latter is usually referred to as continual or 'lifelong' learning. First, we emphasize the connection between adaptive state-space quantisation and continual learning. Second, we describe a novel method for multi-task learning in reinforcement-learning environments. This method is based on constructive neural networks and uses instance-based learning and dynamic programming to compute a task-dependent, agent-internal state space. Third, we describe how the learning system is integrated with the control architecture of the robot. Finally, we investigate the capabilities of the learning algorithm with respect to the transfer of information between related reinforcement-learning tasks, such as navigation tasks in different environments. It is hoped that this method will lead to a speed-up in reinforcement learning and enable an autonomous robot to adapt its behaviour as the environment changes.