Learning Behaviors Models for Robot Execution Control

  • Authors:
  • Guillaume Infantes, Félix Ingrand, Malik Ghallab

  • Affiliation:
  • LAAS-CNRS, 7, Avenue du Colonel Roche, 31077 Toulouse Cedex 4, France, email: {surname.name}@laas.fr

  • Venue:
  • Proceedings of the 2006 conference on ECAI 2006: 17th European Conference on Artificial Intelligence August 29 -- September 1, 2006, Riva del Garda, Italy
  • Year:
  • 2006


Abstract

Robust execution of robotic tasks is a difficult problem. In many situations, such tasks involve complex behaviors combining different functionalities (e.g. perception, localization, motion planning and motion execution). These behaviors are often programmed with a strong focus on the robustness of the behavior itself, not on the definition of a “high level” model usable by a task planner and an execution controller. We propose to learn behavior models as structured stochastic processes: Dynamic Bayesian Networks (DBNs). The DBN formalism allows us to learn and control behaviors with controllable parameters. We experimented with our approach on a real robot, learning over a large number of runs the model of a complex navigation task using a modified version of Expectation Maximization for DBNs. The resulting DBN is then used to control the robot's navigation behavior, and we show that, for given objectives (e.g. avoiding failure, optimizing speed), the learned DBN-driven controller performs much better than the hand-programmed controller. We also show a way to achieve efficient incremental learning of the DBN. We believe the proposed approach remains generic and can be used to learn complex behaviors other than navigation, and for other autonomous systems.
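To make the idea concrete, here is a minimal sketch of how a learned DBN over discrete variables could drive a controller: given a belief over the environment state, the controller evaluates each value of a controllable parameter against the model's conditional probability tables and picks the one maximizing expected success. The state names, the "speed" parameter, and all probability values below are invented for illustration; they are not the variables or CPTs from the paper, where the tables would come from EM over recorded runs.

```python
# Toy two-slice DBN for a navigation behavior (illustrative only).
# Hidden state, controllable parameter ("speed"), and outcome are discrete.
# CPT values are hand-set here; in the paper's setting they would be
# estimated with Expectation Maximization from many navigation runs.

# P(next_state | state, speed): learned transition model
TRANSITION = {
    ("clear", "fast"):     {"clear": 0.7, "cluttered": 0.3},
    ("clear", "slow"):     {"clear": 0.9, "cluttered": 0.1},
    ("cluttered", "fast"): {"clear": 0.2, "cluttered": 0.8},
    ("cluttered", "slow"): {"clear": 0.5, "cluttered": 0.5},
}

# P(success | next_state): learned outcome model
SUCCESS = {"clear": 0.95, "cluttered": 0.4}

def success_probability(belief, speed):
    """Expected success probability one step ahead, marginalizing
    over the current belief and the transition model."""
    p = 0.0
    for state, p_state in belief.items():
        for nxt, p_trans in TRANSITION[(state, speed)].items():
            p += p_state * p_trans * SUCCESS[nxt]
    return p

def choose_speed(belief, speeds=("fast", "slow")):
    """Controller: pick the controllable parameter value that
    maximizes the model's predicted success probability."""
    return max(speeds, key=lambda s: success_probability(belief, s))

belief = {"clear": 0.6, "cluttered": 0.4}
best = choose_speed(belief)  # "slow" wins under these invented CPTs
```

The same loop generalizes to several controllable parameters and richer objectives (e.g. trading off failure probability against speed) by replacing the scalar success term with a utility over the DBN's outcome variables.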