Vision Based State Space Construction for Learning Mobile Robots in Multi-agent Environments

  • Authors:
  • Eiji Uchibe;Minoru Asada;Koh Hosoda


  • Venue:
  • EWLR-6 Proceedings of the 6th European Workshop on Learning Robots
  • Year:
  • 1997


Abstract

State space construction is one of the most fundamental issues in applying reinforcement learning methods to real robot tasks, because these methods need a well-defined state space in order to converge correctly. The problem becomes especially difficult in multi-agent environments, where the visual information observed by a learning robot appears uncorrelated with its own motion because of the actions of other agents whose policies are unknown. This paper proposes a method that estimates the relationship between the learner's behaviors and those of the other agents in the environment through interactions (observation and action), using system identification to construct a state space in such an environment. In order to determine the state vector of each agent, Akaike's Information Criterion is applied to the results of the system identification. Reinforcement learning based on the estimated state vectors is then used to obtain the optimal behavior. The proposed method is applied to soccer-playing physical agents, which learn to cope with a rolling ball and another moving agent. Results from computer simulations and real experiments are shown, followed by a discussion.
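The abstract's core idea, selecting the order of an identified model with Akaike's Information Criterion before handing the resulting state vector to a learner, can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a simple scalar ARX model fitted by least squares, with AIC = N·log(residual variance) + 2·(number of parameters) used to choose among candidate orders. The function names (`fit_arx`, `select_order`) are illustrative only.

```python
import numpy as np

def fit_arx(u, y, order):
    """Least-squares fit of an ARX model:
    y[t] = a_1*y[t-1] + ... + a_p*y[t-p] + b_1*u[t-1] + ... + b_p*u[t-p].
    Returns the parameter vector and the mean squared residual."""
    rows, targets = [], []
    for t in range(order, len(y)):
        # Regressor: past outputs and past inputs, most recent first.
        row = np.concatenate([y[t - order:t][::-1], u[t - order:t][::-1]])
        rows.append(row)
        targets.append(y[t])
    X, Y = np.array(rows), np.array(targets)
    theta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ theta
    return theta, float(np.mean(resid ** 2))

def aic(mse, n_samples, n_params):
    """Akaike's Information Criterion for a Gaussian residual model."""
    return n_samples * np.log(mse) + 2 * n_params

def select_order(u, y, max_order=5):
    """Fit candidate orders and return the one minimizing AIC."""
    best_order, best_score = None, np.inf
    for p in range(1, max_order + 1):
        theta, mse = fit_arx(u, y, p)
        score = aic(mse, len(y) - p, len(theta))
        if score < best_score:
            best_order, best_score = p, score
    return best_order
```

In the paper's setting, `u` would be the learner's actions and `y` the observed image features of another agent or the ball; the AIC-selected order fixes how many past observations and actions form the state vector used by the reinforcement learner.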