Probabilistic action planning for active scene modeling in continuous high-dimensional domains

  • Authors:
  • Robert Eidenberger; Thilo Grundmann; Raoul Zoellner

  • Affiliations:
  • Robert Eidenberger: Department of Computational Perception, Johannes Kepler University Linz, Linz, Austria; Thilo Grundmann and Raoul Zoellner: Department of Intelligent Autonomous Systems, Information and Communication, Siemens AG, Munich, Germany

  • Venue:
  • ICRA'09: Proceedings of the 2009 IEEE International Conference on Robotics and Automation
  • Year:
  • 2009

Abstract

In active perception systems for scene recognition, the utility of an observation is determined by the information gain it yields in the probability distribution over the state space. The goal is to find a sequence of actions that maximizes the system's knowledge at low resource cost. Most current approaches either focus on optimizing the payoff computation while neglecting costs, or develop sophisticated planning strategies on top of simple reward models. This paper presents a probabilistic framework for sequential decision making under model and state uncertainty in continuous, high-dimensional domains. The probabilistic planner, realized as a partially observable Markov decision process (POMDP), reasons over both information-theoretic quality criteria of probability distributions and control action costs. In an experimental setting, an autonomous service robot uses active perception techniques for efficient object recognition in complex multi-object scenarios, facing the difficulties of object occlusion. Because of the high demand for real-time applicability, the probability distributions are represented as mixtures of Gaussians to allow fast, parametric computation.
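The action-selection criterion described in the abstract, trading the information gain of an observation against its control cost over a Gaussian-mixture belief, can be sketched in a much simplified one-dimensional form. The function names, the entropy upper bound, and the Kalman-style variance update below are illustrative assumptions for this sketch, not the authors' implementation:

```python
import math


def mixture_entropy_ub(weights, variances):
    """Upper bound on the differential entropy of a 1-D Gaussian mixture:
    H <= sum_i w_i * (0.5 * log(2*pi*e*var_i) - log(w_i))."""
    return sum(
        w * (0.5 * math.log(2 * math.pi * math.e * v) - math.log(w))
        for w, v in zip(weights, variances)
    )


def posterior_variances(variances, obs_noise_var):
    """Kalman-style variance update applied per component:
    1/var' = 1/var + 1/R, where R is the observation noise variance."""
    return [1.0 / (1.0 / v + 1.0 / obs_noise_var) for v in variances]


def best_action(weights, variances, actions, cost_weight=1.0):
    """Pick the action maximizing (expected information gain - weighted cost).

    `actions` is a list of (name, obs_noise_variance, cost) tuples; the
    information gain is approximated as the reduction of the entropy
    upper bound from the prior belief to the updated belief.
    """
    prior_h = mixture_entropy_ub(weights, variances)
    scored = []
    for name, r, cost in actions:
        post_h = mixture_entropy_ub(weights, posterior_variances(variances, r))
        gain = prior_h - post_h
        scored.append((gain - cost_weight * cost, name))
    return max(scored)[1]


# A precise but expensive viewpoint beats a cheap, noisy one when the
# extra information gain outweighs its extra cost.
weights = [0.5, 0.5]
variances = [1.0, 4.0]
actions = [("close_look", 0.1, 0.5), ("far_look", 2.0, 0.1)]
print(best_action(weights, variances, actions))
```

Because every quantity stays in closed form over the mixture parameters, this kind of score can be evaluated quickly for many candidate actions, which is the point of the parametric mixture-of-Gaussians representation mentioned above.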