Estimation of pointing poses for visually instructing mobile robots under real world conditions

  • Authors:
  • Christian Martin, Frank-Florian Steege, and Horst-Michael Gross

  • Affiliations:
  • Neuroinformatics and Cognitive Robotics Lab, Ilmenau University of Technology, Ilmenau, Germany; MetraLabs GmbH - Neue Technologien und Systeme, Ilmenau, Germany

  • Venue:
  • Robotics and Autonomous Systems
  • Year:
  • 2010

Abstract

In this paper, we present an approach for directing a mobile robot into a target position under real-world conditions by means of pointing poses only. Because one important objective of our work is the development of a low-cost platform, only monocular vision at webcam level is employed. Our previous approach, presented in Gross et al. (2006) [1] and Richarz et al. (2007) [2], has been improved by several additional processing steps. First, a background subtraction technique and a histogram equalization have been integrated into the preprocessing stage to enable operation in environments with structured backgrounds and under variable lighting conditions. Furthermore, a discriminant analysis was used to find the most relevant input features for the pointing pose estimator. The contribution of this paper is, however, not only the presentation of an approach to estimating pointing poses in a demanding real-world scenario on a mobile robot, but also a detailed, evaluative comparison between different image preprocessing techniques, alternative feature extraction methods, and several function approximators on the same set of test and training data. Reasonable combinations of the different methods are tested, and for each component in the processing chain its effect on the accuracy of the target estimation is quantified. The approach presented in this paper has been implemented on the mobile interaction robot Horos to determine its performance and estimation accuracy under real-world conditions. Furthermore, we compared the accuracy of our approach with that of humans performing the same estimation task, and achieved very comparable results for the best estimator.
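The two preprocessing steps named in the abstract (background subtraction against a reference image, then histogram equalization to compensate for variable lighting) can be sketched in a few lines. This is a minimal NumPy illustration of the general techniques, not the authors' implementation; the function names and the difference threshold are assumptions made for this example.

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram equalization: remap intensities so the cumulative
    distribution becomes roughly linear, spreading a narrow intensity
    range over the full 0..255 interval (counters poor lighting)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-empty bin
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

def subtract_background(frame, background, threshold=30):
    """Simple background subtraction: keep only pixels that differ
    from a previously captured reference image by more than a
    threshold (suppresses structured static backgrounds).
    The threshold value here is an illustrative assumption."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = (diff > threshold).astype(np.uint8)
    return frame * mask
```

In a pipeline like the one described, the foreground mask would be computed first and equalization applied afterwards, before any features for the pose estimator are extracted.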