EP for Efficient Stochastic Control with Obstacles

  • Authors:
  • Thomas Mensink;Jakob Verbeek;Bert Kappen

  • Affiliations:
  • XRCE & INRIA Rhône-Alpes, Grenoble, France, thomas.mensink@inria.fr; LEAR-INRIA Rhône-Alpes, Grenoble, France, jakob.verbeek@inria.fr; Radboud University, Nijmegen, The Netherlands, b.kappen@science.ru.nl

  • Venue:
  • Proceedings of ECAI 2010: 19th European Conference on Artificial Intelligence
  • Year:
  • 2010

Abstract

We address the problem of continuous stochastic optimal control in the presence of hard obstacles. Due to the non-smooth character of the obstacles, the traditional approach using dynamic programming in combination with function approximation tends to fail. We consider a recently introduced special class of control problems for which the optimal control computation is reformulated in terms of a path integral. The path integral is typically intractable, but amenable to techniques developed for approximate inference. We argue that the variational approach fails in this case due to the non-smooth cost function. Sampling techniques are simple to implement and converge to the exact result given enough samples; however, the infinite cost associated with hard obstacles renders such sampling procedures inefficient in practice. We suggest Expectation Propagation (EP) as a suitable approximation method, and compare the quality and efficiency of the resulting control with a Monte Carlo (MC) sampler on a car steering task and a ball throwing task. We conclude that EP can solve these challenging problems much better than a sampling approach.
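
The abstract's point about sampling can be made concrete with the standard path-integral control identity from Kappen's framework: the optimal cost-to-go is J(x,t) = -λ log ψ(x,t), where ψ(x,t) is an expectation, over trajectories of the uncontrolled dynamics, of the exponential of minus the accumulated state and terminal cost divided by λ. A hard obstacle corresponds to infinite state cost, so any sampled trajectory that touches an obstacle contributes exactly zero weight to the estimate. The sketch below illustrates this failure mode with a naive MC estimator for a 1-D diffusion between two walls; the dynamics, cost functions, and all names are illustrative assumptions of this note, not the authors' code or tasks.

```python
import numpy as np

def naive_mc_psi(x0, T, dt, sigma, is_obstacle, terminal_cost, lam, n_samples=2000):
    """Naive MC estimate of the desirability psi(x0, 0) for a 1-D diffusion
    with hard obstacles (path-integral control setting, illustrative only).
    A trajectory that ever enters an obstacle has infinite state cost and
    therefore receives zero weight."""
    n_steps = int(T / dt)
    weights = np.zeros(n_samples)
    for i in range(n_samples):
        x = x0
        hit = False
        for _ in range(n_steps):
            # Uncontrolled dynamics: pure Brownian motion for simplicity.
            x = x + sigma * np.sqrt(dt) * np.random.randn()
            if is_obstacle(x):
                hit = True  # infinite cost => exp(-inf / lam) = 0 weight
                break
        if not hit:
            weights[i] = np.exp(-terminal_cost(x) / lam)
    # Return the psi estimate and the fraction of samples that survived.
    return weights.mean(), (weights > 0).mean()

# Example: start midway between two walls at x = +/- 0.5.
psi_hat, survival = naive_mc_psi(
    x0=0.0, T=1.0, dt=0.01, sigma=1.0,
    is_obstacle=lambda x: abs(x) > 0.5,
    terminal_cost=lambda x: x**2,
    lam=1.0,
)
print(f"psi estimate: {psi_hat:.4f}, surviving fraction: {survival:.3f}")
```

As the corridor narrows or the horizon grows, the surviving fraction collapses toward zero, which is precisely the inefficiency the abstract attributes to sampling with hard obstacles and the motivation for the EP approximation.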