Pure reactive behavior learning using Case Based Reasoning for a vision based 4-legged robot

  • Authors:
  • Jose Manuel Peula; Cristina Urdiales; Ignacio Herrero; Isabel Sánchez-Tato; Francisco Sandoval

  • Affiliations:
  • Grupo ISIS, Dpt. Tecnologia Electronica, University of Malaga, ETSI Telecomunicacion, Campus de Teatinos, 29071, Malaga, Spain (all authors)

  • Venue:
  • Robotics and Autonomous Systems
  • Year:
  • 2009

Abstract

A traditional problem in robotics is the adaptation of developed algorithms to different platforms and sensors, as each has its own specifics and associated errors. Hierarchical control architectures deal with this problem by dividing the system into layers: deliberative processing is performed at high level, while low level layers handle reactive behaviors and adaptation to the platform and sensor hardware. Specifically, approaches based on the Emergent Behavior Theory rely on building high level behaviors by combining simpler ones that provide intuitive reactive responses to sensory input. This combination is controlled by higher layers in order to obtain more complex behaviors. Unfortunately, low level behaviors might be difficult to develop, especially when dealing with legged robots and sensors like video cameras, where the resulting motion is heavily influenced by the robot's kinematics and dynamics, and the sensory input is affected by external conditions, transformations, distortions, noise and motion itself (e.g. the camera bouncing problem). In this paper, we propose a new learning based method to solve most of these problems. It basically consists of creating a reactive behavior by driving the robot under supervision for a time. During that time, its visual input is reactively associated with the commands sent to the robot through a Case Based Reasoning (CBR) behavior builder. Thus, the robot learns what the person would do in its situation to achieve a certain goal. This approach has several advantages. First, humans are particularly good at adapting to and taking into account the specifics of a given mobile platform after some use. Thus, kinematics and dynamics are absorbed into the casebase along with how the person thinks they should be dealt with by that particular robot. Similarly, commands are associated with the input sensor as is, so systematic errors in sensors and motors are also implicitly learnt in the casebase (camera bouncing, distortions, noise, ...). Also, different reactive strategies to reach a simple goal can be programmed into the robot by showing, rather than by coding. This is particularly useful because some reactive behaviors are ill-suited to equations. Naturally, CBR allows online adaptation to potential changes after supervised training, so the system is also able to learn by itself when working autonomously. The proposed system has been successfully tested on a 4-legged Aibo robot in a controlled environment. To prove that it is adequate for creating low level layers for hybrid architectures, two different CBR reactive behaviors have been tested and combined into an emergent one. A deliberative layer could be used to extend the system to more complex environments.
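
To illustrate the reactive mapping described in the abstract, the sketch below shows a minimal case base that associates visual feature vectors with motion commands and retrieves the most similar stored case at runtime. It is only an illustration of the general CBR cycle under stated assumptions: the class and method names (ReactiveCBR, retain, retrieve), the cosine-similarity metric and the acceptance threshold are all hypothetical choices, not the paper's actual implementation or feature representation.

    import numpy as np

    class ReactiveCBR:
        """Minimal case base mapping visual feature vectors to motion commands.

        Cases are (features, command) pairs captured while a person drives the
        robot; at runtime the nearest stored case supplies the command.
        (Illustrative sketch only; the paper's matching scheme may differ.)
        """

        def __init__(self, similarity_threshold=0.9):
            self.cases = []                     # list of (feature_vector, command)
            self.threshold = similarity_threshold

        def _similarity(self, a, b):
            # Cosine similarity between two feature vectors (a simple,
            # assumed choice of metric).
            return float(np.dot(a, b) /
                         (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

        def retain(self, features, command):
            """Store a new case (used during supervised driving)."""
            self.cases.append((np.asarray(features, dtype=float), command))

        def retrieve(self, features):
            """Return the command of the most similar case, or None if no
            stored case is similar enough."""
            features = np.asarray(features, dtype=float)
            best_sim, best_cmd = -1.0, None
            for case_feat, cmd in self.cases:
                sim = self._similarity(features, case_feat)
                if sim > best_sim:
                    best_sim, best_cmd = sim, cmd
            return best_cmd if best_sim >= self.threshold else None

    # Supervised phase: log (visual features, human command) pairs.
    cbr = ReactiveCBR()
    cbr.retain([0.8, 0.1, 0.0], "turn_left")
    cbr.retain([0.1, 0.9, 0.1], "walk_forward")

    # Autonomous phase: retrieve the closest case; when nothing is similar
    # enough, the system could fall back to retaining a new case online.
    print(cbr.retrieve([0.75, 0.15, 0.05]))    # -> "turn_left"

Because the person's commands are stored against the raw visual input of that particular robot, platform-specific kinematics, dynamics and sensor artifacts are implicitly captured in the stored cases, which is the effect the abstract describes.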