Making use of unelaborated advice to improve reinforcement learning: a mobile robotics approach

  • Authors:
  • David L. Moreno; Carlos V. Regueiro; Roberto Iglesias; Senén Barro

  • Affiliations:
  • Dpto. Electrónica y Computación, Universidad de Santiago de Compostela, Santiago de Compostela, Spain; Departamento Electrónica y Sistemas, Universidad de A Coruña, A Coruña, Spain; Dpto. Electrónica y Computación, Universidad de Santiago de Compostela, Santiago de Compostela, Spain; Dpto. Electrónica y Computación, Universidad de Santiago de Compostela, Santiago de Compostela, Spain

  • Venue:
  • ICAPR'05: Proceedings of the Third International Conference on Advances in Pattern Recognition, Part I
  • Year:
  • 2005

Abstract

Reinforcement Learning (RL) is thought to be an appropriate paradigm for acquiring control policies in mobile robotics. However, in its standard (tabula rasa) formulation, RL must explore and learn everything from scratch, which is neither realistic nor effective in real-world tasks. In this article we use a new strategy, called Supervised Reinforcement Learning (SRL), that allows external knowledge to be incorporated into this type of learning. We validate it by learning a wall-following behaviour and testing it on a Nomad 200 robot. We show that SRL is able to take advantage of multiple sources of knowledge and even of partially erroneous advice, features that allow an SRL agent to make use of a wide range of prior knowledge without the need for complex or time-consuming elaboration.
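
The abstract does not detail the internals of the SRL architecture, so the sketch below is only a generic illustration of the underlying idea: letting external, possibly imperfect advice bias exploration in a tabular Q-learning agent while the value estimates remain free to override bad suggestions. The AdvisedQLearner class, the advisor callables, and all parameter names are hypothetical and are not taken from the paper.

    import random
    from collections import defaultdict

    class AdvisedQLearner:
        """Tabular Q-learning whose exploration is biased by external advice.

        Illustrative sketch only; not the SRL architecture of the paper.
        Each advisor is a callable state -> suggested action (or None).
        """

        def __init__(self, actions, advisors, alpha=0.1, gamma=0.95,
                     epsilon=0.2, advice_prob=0.5):
            self.q = defaultdict(float)      # Q[(state, action)]
            self.actions = list(actions)
            self.advisors = list(advisors)   # multiple knowledge sources
            self.alpha, self.gamma = alpha, gamma
            self.epsilon = epsilon           # exploration rate
            self.advice_prob = advice_prob   # chance of following advice when exploring

        def select_action(self, state):
            if random.random() < self.epsilon:
                # When exploring, sometimes follow one of the advisors instead of
                # acting purely at random; erroneous advice is later corrected
                # because it only biases exploration, not the value updates.
                if self.advisors and random.random() < self.advice_prob:
                    suggestion = random.choice(self.advisors)(state)
                    if suggestion in self.actions:
                        return suggestion
                return random.choice(self.actions)
            # Otherwise act greedily with respect to the current Q estimates.
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def update(self, state, action, reward, next_state):
            # Standard one-step Q-learning update.
            best_next = max(self.q[(next_state, a)] for a in self.actions)
            td_target = reward + self.gamma * best_next
            self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

    # Hypothetical usage: a hand-written advisor for a toy wall-following task.
    # agent = AdvisedQLearner(actions=["left", "right", "forward"],
    #                         advisors=[lambda s: "forward"])

Because the advice only influences which actions get tried, an advisor that is sometimes wrong slows learning rather than breaking it, which is consistent with the abstract's claim that SRL can exploit partially erroneous advice.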